00:00:00.001 Started by upstream project "autotest-nightly" build number 3912
00:00:00.001 originally caused by:
00:00:00.002 Started by user Latecki, Karol
00:00:00.003 Started by upstream project "autotest-nightly" build number 3911
00:00:00.003 originally caused by:
00:00:00.003 Started by user Latecki, Karol
00:00:00.005 Started by upstream project "autotest-nightly" build number 3909
00:00:00.005 originally caused by:
00:00:00.006 Started by user Latecki, Karol
00:00:00.007 Started by upstream project "autotest-nightly" build number 3908
00:00:00.007 originally caused by:
00:00:00.007 Started by user Latecki, Karol
00:00:00.142 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.143 The recommended git tool is: git
00:00:00.143 using credential 00000000-0000-0000-0000-000000000002
00:00:00.145 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.202 Fetching changes from the remote Git repository
00:00:00.204 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.258 Using shallow fetch with depth 1
00:00:00.258 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.258 > git --version # timeout=10
00:00:00.298 > git --version # 'git version 2.39.2'
00:00:00.298 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.318 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.318 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6 # timeout=5
00:00:05.642 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.654 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.665 Checking out Revision e33ef006ccd688d2b66122cd0240b989d53c9017 (FETCH_HEAD)
00:00:05.665 > git config core.sparsecheckout # timeout=10
00:00:05.677 > git read-tree -mu HEAD # timeout=10
00:00:05.694 > git checkout -f e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=5
00:00:05.713 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs."
00:00:05.713 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10
00:00:05.824 [Pipeline] Start of Pipeline
00:00:05.837 [Pipeline] library
00:00:05.839 Loading library shm_lib@master
00:00:05.839 Library shm_lib@master is cached. Copying from home.
00:00:05.853 [Pipeline] node
00:00:05.860 Running on VM-host-SM17 in /var/jenkins/workspace/nvme-vg-autotest
00:00:05.861 [Pipeline] {
00:00:05.870 [Pipeline] catchError
00:00:05.872 [Pipeline] {
00:00:05.883 [Pipeline] wrap
00:00:05.894 [Pipeline] {
00:00:05.900 [Pipeline] stage
00:00:05.901 [Pipeline] { (Prologue)
00:00:05.917 [Pipeline] echo
00:00:05.918 Node: VM-host-SM17
00:00:05.923 [Pipeline] cleanWs
00:00:05.930 [WS-CLEANUP] Deleting project workspace...
00:00:05.930 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.936 [WS-CLEANUP] done
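The prologue above shallow-fetches a single Gerrit change ref instead of cloning the whole build_pool repository. A minimal standalone sketch of the same checkout (remote URL and change ref taken from the log; the target directory name is illustrative):

    # Shallow-fetch one Gerrit change ref and check out the fetched commit
    git init jbp && cd jbp
    git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6
    git checkout -f FETCH_HEAD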
00:00:06.144 [Pipeline] setCustomBuildProperty
00:00:06.218 [Pipeline] httpRequest
00:00:06.244 [Pipeline] echo
00:00:06.245 Sorcerer 10.211.164.101 is alive
00:00:06.253 [Pipeline] httpRequest
00:00:06.256 HttpMethod: GET
00:00:06.257 URL: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz
00:00:06.258 Sending request to url: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz
00:00:06.273 Response Code: HTTP/1.1 200 OK
00:00:06.274 Success: Status code 200 is in the accepted range: 200,404
00:00:06.274 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz
00:00:11.432 [Pipeline] sh
00:00:11.706 + tar --no-same-owner -xf jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz
00:00:11.718 [Pipeline] httpRequest
00:00:11.733 [Pipeline] echo
00:00:11.734 Sorcerer 10.211.164.101 is alive
00:00:11.741 [Pipeline] httpRequest
00:00:11.744 HttpMethod: GET
00:00:11.745 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:11.745 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:11.752 Response Code: HTTP/1.1 200 OK
00:00:11.752 Success: Status code 200 is in the accepted range: 200,404
00:00:11.753 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:01:46.751 [Pipeline] sh
00:01:47.030 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:01:50.326 [Pipeline] sh
00:01:50.632 + git -C spdk log --oneline -n5
00:01:50.632 f7b31b2b9 log: declare g_deprecation_epoch static
00:01:50.632 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static
00:01:50.632 3731556bd lvol: declare g_lvol_if static
00:01:50.632 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static
00:01:50.632 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static
00:01:50.650 [Pipeline] writeFile
00:01:50.666 [Pipeline] sh
00:01:50.948 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:50.960 [Pipeline] sh
00:01:51.240 + cat autorun-spdk.conf
00:01:51.240 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:51.240 SPDK_TEST_NVME=1
00:01:51.240 SPDK_TEST_FTL=1
00:01:51.240 SPDK_TEST_ISAL=1
00:01:51.240 SPDK_RUN_ASAN=1
00:01:51.240 SPDK_RUN_UBSAN=1
00:01:51.240 SPDK_TEST_XNVME=1
00:01:51.240 SPDK_TEST_NVME_FDP=1
00:01:51.240 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:51.246 RUN_NIGHTLY=1
00:01:51.248 [Pipeline] }
00:01:51.281 [Pipeline] // stage
00:01:51.295 [Pipeline] stage
00:01:51.297 [Pipeline] { (Run VM)
00:01:51.312 [Pipeline] sh
00:01:51.590 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:51.590 + echo 'Start stage prepare_nvme.sh'
00:01:51.590 Start stage prepare_nvme.sh
00:01:51.590 + [[ -n 4 ]]
00:01:51.590 + disk_prefix=ex4
00:01:51.590 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:51.590 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:51.590 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:51.590 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:51.590 ++ SPDK_TEST_NVME=1
00:01:51.590 ++ SPDK_TEST_FTL=1
00:01:51.590 ++ SPDK_TEST_ISAL=1
00:01:51.590 ++ SPDK_RUN_ASAN=1
00:01:51.590 ++ SPDK_RUN_UBSAN=1
00:01:51.590 ++ SPDK_TEST_XNVME=1
00:01:51.590 ++ SPDK_TEST_NVME_FDP=1
00:01:51.590 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:51.590 ++ RUN_NIGHTLY=1
00:01:51.590 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:51.590 + nvme_files=()
00:01:51.590 + declare -A nvme_files
00:01:51.590 + backend_dir=/var/lib/libvirt/images/backends
00:01:51.590 + nvme_files['nvme.img']=5G
00:01:51.590 + nvme_files['nvme-cmb.img']=5G
00:01:51.590 + nvme_files['nvme-multi0.img']=4G
00:01:51.590 + nvme_files['nvme-multi1.img']=4G
00:01:51.590 + nvme_files['nvme-multi2.img']=4G
00:01:51.590 + nvme_files['nvme-openstack.img']=8G
00:01:51.590 + nvme_files['nvme-zns.img']=5G
00:01:51.590 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:51.590 + (( SPDK_TEST_FTL == 1 ))
00:01:51.590 + nvme_files["nvme-ftl.img"]=6G
00:01:51.590 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:51.590 + nvme_files["nvme-fdp.img"]=1G
00:01:51.590 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:51.590 + for nvme in "${!nvme_files[@]}"
00:01:51.590 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:51.590 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:51.590 + for nvme in "${!nvme_files[@]}"
00:01:51.590 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G
00:01:51.590 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:51.590 + for nvme in "${!nvme_files[@]}"
00:01:51.590 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:51.590 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:51.590 + for nvme in "${!nvme_files[@]}"
00:01:51.590 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:51.590 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:51.849 + for nvme in "${!nvme_files[@]}"
00:01:51.849 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:51.849 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:51.849 + for nvme in "${!nvme_files[@]}"
00:01:51.849 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:51.849 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:51.849 + for nvme in "${!nvme_files[@]}"
00:01:51.849 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:51.849 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:51.849 + for nvme in "${!nvme_files[@]}"
00:01:51.849 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G
00:01:51.849 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:51.849 + for nvme in "${!nvme_files[@]}"
00:01:51.849 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:51.849 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:51.849 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:51.849 + echo 'End stage prepare_nvme.sh'
00:01:51.849 End stage prepare_nvme.sh
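Each backing file above is a plain raw image; the "Formatting ..." lines are characteristic qemu-img output. A hedged sketch of the equivalent creation step, assuming create_nvme_img.sh wraps qemu-img (path and size taken from the log):

    # Preallocate a 5G raw backing image for an emulated NVMe drive
    qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex4-nvme.img 5G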
00:01:51.859 [Pipeline] sh
00:01:52.171 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:52.171 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38
00:01:52.171
00:01:52.171 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:52.171 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:52.171 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:52.171 HELP=0
00:01:52.171 DRY_RUN=0
00:01:52.171 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,
00:01:52.171 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:52.171 NVME_AUTO_CREATE=0
00:01:52.171 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,,
00:01:52.171 NVME_CMB=,,,,
00:01:52.171 NVME_PMR=,,,,
00:01:52.171 NVME_ZNS=,,,,
00:01:52.171 NVME_MS=true,,,,
00:01:52.171 NVME_FDP=,,,on,
00:01:52.171 SPDK_VAGRANT_DISTRO=fedora38
00:01:52.171 SPDK_VAGRANT_VMCPU=10
00:01:52.171 SPDK_VAGRANT_VMRAM=12288
00:01:52.171 SPDK_VAGRANT_PROVIDER=libvirt
00:01:52.171 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:52.171 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:52.171 SPDK_OPENSTACK_NETWORK=0
00:01:52.171 VAGRANT_PACKAGE_BOX=0
00:01:52.171 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:52.171 FORCE_DISTRO=true
00:01:52.171 VAGRANT_BOX_VERSION=
00:01:52.171 EXTRA_VAGRANTFILES=
00:01:52.171 NIC_MODEL=e1000
00:01:52.171
00:01:52.171 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt'
00:01:52.171 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:55.449 Bringing machine 'default' up with 'libvirt' provider...
00:01:56.016 ==> default: Creating image (snapshot of base box volume).
00:01:56.275 ==> default: Creating domain with the following settings...
00:01:56.276 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721671747_b20335b2868e75a9bff7
00:01:56.276 ==> default: -- Domain type: kvm
00:01:56.276 ==> default: -- Cpus: 10
00:01:56.276 ==> default: -- Feature: acpi
00:01:56.276 ==> default: -- Feature: apic
00:01:56.276 ==> default: -- Feature: pae
00:01:56.276 ==> default: -- Memory: 12288M
00:01:56.276 ==> default: -- Memory Backing: hugepages:
00:01:56.276 ==> default: -- Management MAC:
00:01:56.276 ==> default: -- Loader:
00:01:56.276 ==> default: -- Nvram:
00:01:56.276 ==> default: -- Base box: spdk/fedora38
00:01:56.276 ==> default: -- Storage pool: default
00:01:56.276 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721671747_b20335b2868e75a9bff7.img (20G)
00:01:56.276 ==> default: -- Volume Cache: default
00:01:56.276 ==> default: -- Kernel:
00:01:56.276 ==> default: -- Initrd:
00:01:56.276 ==> default: -- Graphics Type: vnc
00:01:56.276 ==> default: -- Graphics Port: -1
00:01:56.276 ==> default: -- Graphics IP: 127.0.0.1
00:01:56.276 ==> default: -- Graphics Password: Not defined
00:01:56.276 ==> default: -- Video Type: cirrus
00:01:56.276 ==> default: -- Video VRAM: 9216
00:01:56.276 ==> default: -- Sound Type:
00:01:56.276 ==> default: -- Keymap: en-us
00:01:56.276 ==> default: -- TPM Path:
00:01:56.276 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:56.276 ==> default: -- Command line args:
00:01:56.276 ==> default: -> value=-device,
00:01:56.276 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:56.276 ==> default: -> value=-drive,
00:01:56.276 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:56.276 ==> default: -> value=-device,
00:01:56.276 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:56.276 ==> default: -> value=-device,
00:01:56.276 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:56.276 ==> default: -> value=-drive,
00:01:56.276 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0,
00:01:56.276 ==> default: -> value=-device,
00:01:56.276 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:56.276 ==> default: -> value=-device,
00:01:56.276 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:56.276 ==> default: -> value=-drive,
00:01:56.276 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:56.276 ==> default: -> value=-device,
00:01:56.276 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:56.276 ==> default: -> value=-drive,
00:01:56.276 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:56.276 ==> default: -> value=-device,
00:01:56.276 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:56.276 ==> default: -> value=-drive,
00:01:56.276 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:56.276 ==> default: -> value=-device,
00:01:56.276 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:56.276 ==> default: -> value=-device,
00:01:56.276 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:56.276 ==> default: -> value=-device,
00:01:56.276 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:56.276 ==> default: -> value=-drive,
00:01:56.276 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:56.276 ==> default: -> value=-device,
00:01:56.276 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
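The command line args above define four emulated NVMe controllers; nvme-3 additionally sits in an NVMe subsystem with Flexible Data Placement (FDP) enabled. Condensed into a direct QEMU invocation, the FDP portion looks roughly like this (device properties copied from the log; the emulator path is the SPDK_QEMU_EMULATOR configured above):

    # One NVMe controller attached to an FDP-enabled subsystem
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096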
00:01:56.534 ==> default: Creating shared folders metadata...
00:01:56.534 ==> default: Starting domain.
00:01:59.098 ==> default: Waiting for domain to get an IP address...
00:02:17.183 ==> default: Waiting for SSH to become available...
00:02:17.183 ==> default: Configuring and enabling network interfaces...
00:02:20.467 default: SSH address: 192.168.121.205:22
00:02:20.467 default: SSH username: vagrant
00:02:20.467 default: SSH auth method: private key
00:02:23.079 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:31.207 ==> default: Mounting SSHFS shared folder...
00:02:32.148 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:02:32.148 ==> default: Checking Mount..
00:02:33.524 ==> default: Folder Successfully Mounted!
00:02:33.524 ==> default: Running provisioner: file...
00:02:34.091 default: ~/.gitconfig => .gitconfig
00:02:34.658
00:02:34.658 SUCCESS!
00:02:34.658
00:02:34.658 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:02:34.658 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:34.658 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:02:34.658
00:02:34.668 [Pipeline] }
00:02:34.687 [Pipeline] // stage
00:02:34.697 [Pipeline] dir
00:02:34.698 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt
00:02:34.700 [Pipeline] {
00:02:34.713 [Pipeline] catchError
00:02:34.715 [Pipeline] {
00:02:34.729 [Pipeline] sh
00:02:35.008 + vagrant ssh-config --host vagrant
00:02:35.008 + sed -ne /^Host/,$p
00:02:35.008 + tee ssh_conf
00:02:38.430 Host vagrant
00:02:38.430 HostName 192.168.121.205
00:02:38.430 User vagrant
00:02:38.430 Port 22
00:02:38.430 UserKnownHostsFile /dev/null
00:02:38.430 StrictHostKeyChecking no
00:02:38.430 PasswordAuthentication no
00:02:38.430 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:02:38.430 IdentitiesOnly yes
00:02:38.430 LogLevel FATAL
00:02:38.430 ForwardAgent yes
00:02:38.430 ForwardX11 yes
00:02:38.430
00:02:38.442 [Pipeline] withEnv
00:02:38.443 [Pipeline] {
00:02:38.456 [Pipeline] sh
00:02:38.736 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:38.736 source /etc/os-release
00:02:38.736 [[ -e /image.version ]] && img=$(< /image.version)
00:02:38.736 # Minimal, systemd-like check.
00:02:38.736 if [[ -e /.dockerenv ]]; then
00:02:38.736 # Clear garbage from the node's name:
00:02:38.736 # agt-er_autotest_547-896 -> autotest_547-896
00:02:38.736 # $HOSTNAME is the actual container id
00:02:38.736 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:38.736 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:38.736 # We can assume this is a mount from a host where container is running,
00:02:38.736 # so fetch its hostname to easily identify the target swarm worker.
00:02:38.736 container="$(< /etc/hostname) ($agent)"
00:02:38.736 else
00:02:38.736 # Fallback
00:02:38.736 container=$agent
00:02:38.736 fi
00:02:38.736 fi
00:02:38.736 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:38.736
00:02:39.006 [Pipeline] }
00:02:39.025 [Pipeline] // withEnv
00:02:39.033 [Pipeline] setCustomBuildProperty
00:02:39.049 [Pipeline] stage
00:02:39.051 [Pipeline] { (Tests)
00:02:39.069 [Pipeline] sh
00:02:39.347 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:39.616 [Pipeline] sh
00:02:39.894 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:39.909 [Pipeline] timeout
00:02:39.909 Timeout set to expire in 40 min
00:02:39.911 [Pipeline] {
00:02:39.926 [Pipeline] sh
00:02:40.202 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:40.770 HEAD is now at f7b31b2b9 log: declare g_deprecation_epoch static
00:02:40.818 [Pipeline] sh
00:02:41.109 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:41.401 [Pipeline] sh
00:02:41.706 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:41.725 [Pipeline] sh
00:02:42.005 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:02:42.005 ++ readlink -f spdk_repo
00:02:42.005 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:42.005 + [[ -n /home/vagrant/spdk_repo ]]
00:02:42.005 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:42.005 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:42.005 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:42.005 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:42.264 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:42.264 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:42.264 + cd /home/vagrant/spdk_repo
00:02:42.264 + source /etc/os-release
00:02:42.264 ++ NAME='Fedora Linux'
00:02:42.264 ++ VERSION='38 (Cloud Edition)'
00:02:42.264 ++ ID=fedora
00:02:42.264 ++ VERSION_ID=38
00:02:42.264 ++ VERSION_CODENAME=
00:02:42.264 ++ PLATFORM_ID=platform:f38
00:02:42.264 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:42.264 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:42.264 ++ LOGO=fedora-logo-icon
00:02:42.264 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:42.264 ++ HOME_URL=https://fedoraproject.org/
00:02:42.264 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:42.264 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:42.264 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:42.264 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:42.264 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:42.264 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:42.264 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:42.264 ++ SUPPORT_END=2024-05-14
00:02:42.264 ++ VARIANT='Cloud Edition'
00:02:42.264 ++ VARIANT_ID=cloud
00:02:42.264 + uname -a
00:02:42.264 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:42.264 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:42.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:42.781 Hugepages
00:02:42.781 node hugesize free / total
00:02:42.781 node0 1048576kB 0 / 0
00:02:42.781 node0 2048kB 0 / 0
00:02:42.781
00:02:42.781 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:42.781 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:42.781 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:43.040 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:43.040 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:43.040 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
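The setup.sh status output above shows the four QEMU NVMe controllers still bound to the kernel nvme driver and no hugepages reserved yet; the functional tests reserve memory and rebind devices with the same script later in the run. A hedged sketch of that step (HUGEMEM is setup.sh's documented size-in-MB knob):

    # Reserve ~2GB of hugepages and bind test devices to userspace drivers
    sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh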
00:02:43.040 + rm -f /tmp/spdk-ld-path
00:02:43.040 + source autorun-spdk.conf
00:02:43.040 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:43.040 ++ SPDK_TEST_NVME=1
00:02:43.040 ++ SPDK_TEST_FTL=1
00:02:43.040 ++ SPDK_TEST_ISAL=1
00:02:43.040 ++ SPDK_RUN_ASAN=1
00:02:43.040 ++ SPDK_RUN_UBSAN=1
00:02:43.040 ++ SPDK_TEST_XNVME=1
00:02:43.040 ++ SPDK_TEST_NVME_FDP=1
00:02:43.040 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:43.040 ++ RUN_NIGHTLY=1
00:02:43.040 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:43.040 + [[ -n '' ]]
00:02:43.040 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:43.040 + for M in /var/spdk/build-*-manifest.txt
00:02:43.040 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:43.040 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:43.040 + for M in /var/spdk/build-*-manifest.txt
00:02:43.040 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:43.040 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:43.040 ++ uname
00:02:43.040 + [[ Linux == \L\i\n\u\x ]]
00:02:43.040 + sudo dmesg -T
00:02:43.040 + sudo dmesg --clear
00:02:43.040 + dmesg_pid=5145
00:02:43.040 + sudo dmesg -Tw
00:02:43.040 + [[ Fedora Linux == FreeBSD ]]
00:02:43.040 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:43.040 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:43.040 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:43.040 + [[ -x /usr/src/fio-static/fio ]]
00:02:43.040 + export FIO_BIN=/usr/src/fio-static/fio
00:02:43.040 + FIO_BIN=/usr/src/fio-static/fio
00:02:43.040 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:43.040 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:43.040 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:43.040 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:43.040 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:43.040 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:43.041 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:43.041 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:43.041 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:43.041 Test configuration:
00:02:43.041 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:43.041 SPDK_TEST_NVME=1
00:02:43.041 SPDK_TEST_FTL=1
00:02:43.041 SPDK_TEST_ISAL=1
00:02:43.041 SPDK_RUN_ASAN=1
00:02:43.041 SPDK_RUN_UBSAN=1
00:02:43.041 SPDK_TEST_XNVME=1
00:02:43.041 SPDK_TEST_NVME_FDP=1
00:02:43.041 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:43.041 RUN_NIGHTLY=1
18:09:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
18:09:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
18:09:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
18:09:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
18:09:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:09:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:09:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:09:55 -- paths/export.sh@5 -- $ export PATH
18:09:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:09:55 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
18:09:55 -- common/autobuild_common.sh@447 -- $ date +%s
00:02:43.299 18:09:55 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721671795.XXXXXX
18:09:55 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721671795.ozNsa0
18:09:55 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
18:09:55 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
18:09:55 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
18:09:55 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
18:09:55 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
18:09:55 -- common/autobuild_common.sh@463 -- $ get_config_params
18:09:55 -- common/autotest_common.sh@396 -- $ xtrace_disable
18:09:55 -- common/autotest_common.sh@10 -- $ set +x
18:09:55 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
18:09:55 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
18:09:55 -- pm/common@17 -- $ local monitor
18:09:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:09:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:09:55 -- pm/common@25 -- $ sleep 1
18:09:55 -- pm/common@21 -- $ date +%s
18:09:55 -- pm/common@21 -- $ date +%s
18:09:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721671795
18:09:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721671795
00:02:43.299 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721671795_collect-vmstat.pm.log
00:02:43.299 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721671795_collect-cpu-load.pm.log
00:02:44.235 18:09:56 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
18:09:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
18:09:56 -- spdk/autobuild.sh@12 -- $ umask 022
18:09:56 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
18:09:56 -- spdk/autobuild.sh@16 -- $ date -u
00:02:44.235 Mon Jul 22 06:09:56 PM UTC 2024
18:09:56 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:44.235 v24.09-pre-297-gf7b31b2b9
18:09:56 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
18:09:56 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
18:09:56 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
18:09:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable
18:09:56 -- common/autotest_common.sh@10 -- $ set +x
00:02:44.235 ************************************
00:02:44.235 START TEST asan
00:02:44.235 ************************************
00:02:44.235 using asan
18:09:56 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan'
00:02:44.235
00:02:44.235 real 0m0.000s
00:02:44.235 user 0m0.000s
00:02:44.235 sys 0m0.000s
18:09:56 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable
18:09:56 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:44.236 ************************************
00:02:44.236 END TEST asan
00:02:44.236 ************************************
18:09:56 -- common/autotest_common.sh@1142 -- $ return 0
18:09:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
18:09:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
18:09:56 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
18:09:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable
18:09:56 -- common/autotest_common.sh@10 -- $ set +x
00:02:44.236 ************************************
00:02:44.236 START TEST ubsan
00:02:44.236 ************************************
00:02:44.236 using ubsan
18:09:56 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:02:44.236
00:02:44.236 real 0m0.000s
00:02:44.236 user 0m0.000s
00:02:44.236 sys 0m0.000s
18:09:56 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:44.236 ************************************
18:09:56 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:44.236 END TEST ubsan
00:02:44.236 ************************************
18:09:56 -- common/autotest_common.sh@1142 -- $ return 0
18:09:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
18:09:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
18:09:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
18:09:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
18:09:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
18:09:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
18:09:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
18:09:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
18:09:56 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:44.493 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:44.494 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:45.060 Using 'verbs' RDMA provider
00:03:00.946 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:13.190 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:13.190 Creating mk/config.mk...done.
00:03:13.190 Creating mk/cc.flags.mk...done.
00:03:13.190 Type 'make' to build.
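The configure invocation above comes straight from the logged config_params. For reference, a hedged sketch of reproducing a comparable build outside the CI VM (the public GitHub mirror is an assumption; the flags are a subset of those logged):

    # Rebuild SPDK with the sanitizer and xnvme options used in this run
    git clone https://github.com/spdk/spdk && cd spdk
    git submodule update --init            # pulls the bundled DPDK, among others
    ./configure --enable-debug --enable-asan --enable-ubsan --with-xnvme
    make -j"$(nproc)"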
00:03:13.190 18:10:23 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
18:10:23 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
18:10:23 -- common/autotest_common.sh@1105 -- $ xtrace_disable
18:10:23 -- common/autotest_common.sh@10 -- $ set +x
00:03:13.190 ************************************
00:03:13.190 START TEST make
00:03:13.190 ************************************
18:10:23 make -- common/autotest_common.sh@1123 -- $ make -j10
00:03:13.190 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:13.190 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:13.190 meson setup builddir \
00:03:13.190 -Dwith-libaio=enabled \
00:03:13.190 -Dwith-liburing=enabled \
00:03:13.190 -Dwith-libvfn=disabled \
00:03:13.190 -Dwith-spdk=false && \
00:03:13.190 meson compile -C builddir && \
00:03:13.190 cd -)
00:03:13.190 make[1]: Nothing to be done for 'all'.
00:03:15.088 The Meson build system
00:03:15.088 Version: 1.3.1
00:03:15.088 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:15.088 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:15.088 Build type: native build
00:03:15.088 Project name: xnvme
00:03:15.088 Project version: 0.7.3
00:03:15.088 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:15.088 C linker for the host machine: cc ld.bfd 2.39-16
00:03:15.088 Host machine cpu family: x86_64
00:03:15.088 Host machine cpu: x86_64
00:03:15.088 Message: host_machine.system: linux
00:03:15.088 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:15.088 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:15.088 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:15.088 Run-time dependency threads found: YES
00:03:15.088 Has header "setupapi.h" : NO
00:03:15.088 Has header "linux/blkzoned.h" : YES
00:03:15.088 Has header "linux/blkzoned.h" : YES (cached)
00:03:15.088 Has header "libaio.h" : YES
00:03:15.088 Library aio found: YES
00:03:15.088 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:15.088 Run-time dependency liburing found: YES 2.2
00:03:15.088 Dependency libvfn skipped: feature with-libvfn disabled
00:03:15.088 Run-time dependency appleframeworks found: NO (tried framework)
00:03:15.088 Run-time dependency appleframeworks found: NO (tried framework)
00:03:15.088 Configuring xnvme_config.h using configuration
00:03:15.088 Configuring xnvme.spec using configuration
00:03:15.088 Run-time dependency bash-completion found: YES 2.11
00:03:15.088 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:15.088 Program cp found: YES (/usr/bin/cp)
00:03:15.088 Has header "winsock2.h" : NO
00:03:15.088 Has header "dbghelp.h" : NO
00:03:15.088 Library rpcrt4 found: NO
00:03:15.088 Library rt found: YES
00:03:15.088 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:15.088 Found CMake: /usr/bin/cmake (3.27.7)
00:03:15.088 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:03:15.088 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:03:15.088 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:03:15.088 Build targets in project: 32
00:03:15.088
00:03:15.088 xnvme 0.7.3
00:03:15.088
00:03:15.088 User defined options
00:03:15.088 with-libaio : enabled
00:03:15.088 with-liburing: enabled
00:03:15.088 with-libvfn : disabled
00:03:15.088 with-spdk : false
00:03:15.088
00:03:15.088 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:15.346 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:15.346 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:03:15.346 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:03:15.346 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:03:15.346 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:03:15.346 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:03:15.346 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:03:15.346 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:03:15.346 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:03:15.603 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:03:15.603 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:03:15.603 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:03:15.603 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:03:15.603 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:03:15.603 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:03:15.603 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:03:15.603 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:03:15.603 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:03:15.603 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:03:15.603 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:03:15.603 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:03:15.603 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:03:15.603 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:03:15.603 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:03:15.603 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:03:15.861 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:03:15.861 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:03:15.861 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:03:15.861 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:03:15.861 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:03:15.861 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:03:15.861 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:03:15.861 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:03:15.861 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:03:15.861 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:03:15.861 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:03:15.861 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:03:15.861 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:03:15.861 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:03:15.861 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:03:15.861 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:03:15.861 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:03:15.861 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:03:15.861 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:03:15.861 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:03:15.861 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:03:15.861 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:03:15.861 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:03:15.861 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:03:15.861 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:03:15.861 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:03:15.861 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:03:16.119 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:03:16.119 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:03:16.119 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:03:16.119 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:03:16.119 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:03:16.119 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:03:16.119 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:03:16.119 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:03:16.119 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:03:16.119 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:03:16.119 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:03:16.119 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:03:16.119 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:03:16.119 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:03:16.119 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:03:16.119 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:03:16.377 [68/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:03:16.377 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:03:16.377 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:03:16.377 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:03:16.377 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:03:16.377 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:03:16.377 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:03:16.377 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:03:16.377 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:03:16.377 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:03:16.377 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:03:16.377 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:03:16.377 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:03:16.377 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:03:16.377 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:03:16.634 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:03:16.634 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:03:16.634 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:03:16.634 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:03:16.634 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:03:16.634 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:03:16.634 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:03:16.634 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:03:16.634 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:03:16.634 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:03:16.634 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:03:16.634 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:03:16.634 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:03:16.634 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:03:16.634 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:03:16.634 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:03:16.634 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:03:16.634 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:03:16.634 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:03:16.892 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:03:16.892 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:03:16.892 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:03:16.892 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:03:16.892 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:03:16.892 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:03:16.892 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:03:16.892 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:03:16.892 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:03:16.892 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:03:16.892 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:03:16.892 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:03:16.892 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:03:16.892 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:03:16.892 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:03:16.892 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:03:16.892 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:03:16.892 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:03:16.892 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:03:16.892 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:03:16.892 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:03:16.892 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:03:16.892 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:03:17.149 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:03:17.149 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:03:17.149 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:03:17.149 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:03:17.149 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:03:17.149 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:03:17.149 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:03:17.149 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:03:17.149 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:03:17.149 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:03:17.149 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:03:17.149 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:03:17.149 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:03:17.149 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:03:17.149 [139/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:03:17.149 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:03:17.407 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:03:17.407 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:03:17.407 [143/203] Linking target lib/libxnvme.so
00:03:17.407 [144/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:03:17.407 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:03:17.407 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:03:17.407 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:03:17.407 [148/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:03:17.407 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:03:17.407 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:03:17.407 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:03:17.407 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:03:17.407 [153/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:03:17.407 [154/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:03:17.665 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:03:17.665 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:03:17.665 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:03:17.665 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:03:17.665 [159/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:03:17.665 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:03:17.665 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:03:17.665 [162/203] Compiling C object tools/xdd.p/xdd.c.o
00:03:17.665 [163/203] Compiling C object tools/kvs.p/kvs.c.o
00:03:17.665 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:03:17.665 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:03:17.665 [166/203] Compiling C object tools/lblk.p/lblk.c.o
00:03:17.923 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:03:17.923 [168/203] Compiling C object tools/zoned.p/zoned.c.o
00:03:17.923 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:03:17.923 [170/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:03:17.923 [171/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:03:17.923 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:03:18.181 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:03:18.181 [174/203] Linking static target lib/libxnvme.a
00:03:18.181 [175/203] Linking target tests/xnvme_tests_ioworker
00:03:18.181 [176/203] Linking target tests/xnvme_tests_scc
00:03:18.181 [177/203] Linking target tests/xnvme_tests_async_intf
00:03:18.181 [178/203] Linking target tests/xnvme_tests_xnvme_cli
00:03:18.181 [179/203] Linking target tests/xnvme_tests_enum
00:03:18.181 [180/203] Linking target tests/xnvme_tests_cli
00:03:18.181 [181/203] Linking target tests/xnvme_tests_xnvme_file
00:03:18.181 [182/203] Linking target tests/xnvme_tests_buf
00:03:18.181 [183/203] Linking target tests/xnvme_tests_lblk
00:03:18.181 [184/203] Linking target tests/xnvme_tests_znd_append
00:03:18.181 [185/203] Linking target tests/xnvme_tests_znd_explicit_open
00:03:18.181 [186/203] Linking target tests/xnvme_tests_znd_zrwa
00:03:18.439 [187/203] Linking target tools/lblk
00:03:18.439 [188/203] Linking target tests/xnvme_tests_znd_state
00:03:18.439 [189/203] Linking target tests/xnvme_tests_kvs
00:03:18.439 [190/203] Linking target tools/kvs
00:03:18.439 [191/203] Linking target tools/xnvme_file
00:03:18.439 [192/203] Linking target tools/zoned
00:03:18.439 [193/203] Linking target examples/xnvme_dev
00:03:18.439 [194/203] Linking target examples/xnvme_hello
00:03:18.439 [195/203] Linking target tests/xnvme_tests_map
00:03:18.439 [196/203] Linking target tools/xdd
00:03:18.439 [197/203] Linking target examples/xnvme_io_async
00:03:18.439 [198/203] Linking target tools/xnvme
00:03:18.439 [199/203] Linking target examples/xnvme_single_sync
00:03:18.439 [200/203] Linking target examples/zoned_io_sync
00:03:18.439 [201/203] Linking target examples/xnvme_enum
00:03:18.439 [202/203] Linking target examples/zoned_io_async
00:03:18.439 [203/203] Linking target examples/xnvme_single_async
00:03:18.439 INFO: autodetecting backend as ninja
00:03:18.439 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:18.439 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:03:26.554 The Meson build system
00:03:26.554 Version: 1.3.1
00:03:26.554 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:26.554 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:26.554 Build type: native build
00:03:26.554 Program cat found: YES (/usr/bin/cat)
00:03:26.554 Project name: DPDK
00:03:26.554 Project version: 24.03.0
00:03:26.554 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:26.554 C linker for the host machine: cc ld.bfd 2.39-16
00:03:26.554 Host machine cpu family: x86_64
00:03:26.554 Host machine cpu: x86_64
00:03:26.554 Message: ## Building in Developer Mode ##
00:03:26.554 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:26.554 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:26.554 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:26.554 Program python3 found: YES (/usr/bin/python3)
00:03:26.554 Program cat found: YES (/usr/bin/cat)
00:03:26.554 Compiler for C supports arguments -march=native: YES
00:03:26.554 Checking for size of "void *" : 8
00:03:26.554 Checking for size of "void *" : 8 (cached)
00:03:26.554 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:03:26.554 Library m found: YES
00:03:26.554 Library numa found: YES
00:03:26.554 Has header "numaif.h" : YES
00:03:26.554 Library fdt found: NO
00:03:26.554 Library execinfo found: NO
00:03:26.554 Has header "execinfo.h" : YES
00:03:26.554 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:26.554 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:26.554 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:26.554 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:26.554 Run-time dependency openssl found: YES 3.0.9
00:03:26.554 Run-time dependency libpcap found: YES 1.10.4
00:03:26.554 Has header "pcap.h" with dependency libpcap: YES
00:03:26.554 Compiler for C supports arguments -Wcast-qual: YES
00:03:26.554 Compiler for C supports arguments -Wdeprecated: YES
00:03:26.554 Compiler for C supports arguments -Wformat: YES
00:03:26.554 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:26.554 Compiler for C supports arguments -Wformat-security: NO
00:03:26.554 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:26.554 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:26.554 Compiler for C supports arguments -Wnested-externs: YES
00:03:26.554 Compiler for C supports arguments -Wold-style-definition: YES
00:03:26.554 Compiler for C supports arguments -Wpointer-arith: YES
00:03:26.554 Compiler for C supports arguments -Wsign-compare: YES
00:03:26.554 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:26.554 Compiler for C supports arguments -Wundef: YES
00:03:26.554 Compiler for C supports arguments -Wwrite-strings: YES
00:03:26.554 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:26.554 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:26.554 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:26.554 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:26.554 Program objdump found: YES (/usr/bin/objdump)
00:03:26.554 Compiler for C supports arguments -mavx512f: YES
00:03:26.554 Checking if "AVX512 checking" compiles: YES
00:03:26.554 Fetching value of define "__SSE4_2__" : 1
00:03:26.554 Fetching value of define "__AES__" : 1
00:03:26.554 Fetching value of define "__AVX__" : 1
00:03:26.554 Fetching value of define "__AVX2__" : 1
00:03:26.554 Fetching value of define "__AVX512BW__" : (undefined)
00:03:26.554 Fetching value of define "__AVX512CD__" : (undefined)
00:03:26.554 Fetching value of define "__AVX512DQ__" : (undefined)
00:03:26.554 Fetching value of define "__AVX512F__" : (undefined)
00:03:26.554 Fetching value of define "__AVX512VL__" : (undefined)
00:03:26.554 Fetching value of define "__PCLMUL__" : 1
00:03:26.554 Fetching value of define "__RDRND__" : 1
00:03:26.554 Fetching value of define "__RDSEED__" : 1
00:03:26.554 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:26.554 Fetching value of define "__znver1__" : (undefined)
00:03:26.554 Fetching value of define "__znver2__" : (undefined)
00:03:26.554 Fetching value of define "__znver3__" : (undefined)
00:03:26.554 Fetching value of define "__znver4__" : (undefined)
00:03:26.554 Library asan found: YES
00:03:26.554 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:26.554 Message: lib/log: Defining dependency "log"
00:03:26.554 Message: lib/kvargs: Defining dependency "kvargs"
00:03:26.554 Message: lib/telemetry: Defining dependency "telemetry"
00:03:26.554 Library rt found: YES
00:03:26.554 Checking for function "getentropy" : NO
00:03:26.554 Message: lib/eal: Defining dependency "eal"
00:03:26.554 Message: lib/ring: Defining dependency "ring"
00:03:26.554 Message: lib/rcu: Defining dependency "rcu"
00:03:26.554 Message: lib/mempool: Defining dependency "mempool"
00:03:26.554 Message: lib/mbuf: Defining dependency "mbuf"
dependency "mbuf" 00:03:26.554 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:26.554 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:26.554 Compiler for C supports arguments -mpclmul: YES 00:03:26.554 Compiler for C supports arguments -maes: YES 00:03:26.554 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:26.554 Compiler for C supports arguments -mavx512bw: YES 00:03:26.554 Compiler for C supports arguments -mavx512dq: YES 00:03:26.554 Compiler for C supports arguments -mavx512vl: YES 00:03:26.554 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:26.554 Compiler for C supports arguments -mavx2: YES 00:03:26.554 Compiler for C supports arguments -mavx: YES 00:03:26.554 Message: lib/net: Defining dependency "net" 00:03:26.554 Message: lib/meter: Defining dependency "meter" 00:03:26.554 Message: lib/ethdev: Defining dependency "ethdev" 00:03:26.554 Message: lib/pci: Defining dependency "pci" 00:03:26.554 Message: lib/cmdline: Defining dependency "cmdline" 00:03:26.554 Message: lib/hash: Defining dependency "hash" 00:03:26.554 Message: lib/timer: Defining dependency "timer" 00:03:26.554 Message: lib/compressdev: Defining dependency "compressdev" 00:03:26.555 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:26.555 Message: lib/dmadev: Defining dependency "dmadev" 00:03:26.555 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:26.555 Message: lib/power: Defining dependency "power" 00:03:26.555 Message: lib/reorder: Defining dependency "reorder" 00:03:26.555 Message: lib/security: Defining dependency "security" 00:03:26.555 Has header "linux/userfaultfd.h" : YES 00:03:26.555 Has header "linux/vduse.h" : YES 00:03:26.555 Message: lib/vhost: Defining dependency "vhost" 00:03:26.555 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:26.555 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:26.555 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:26.555 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:26.555 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:26.555 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:26.555 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:26.555 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:26.555 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:26.555 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:26.555 Program doxygen found: YES (/usr/bin/doxygen) 00:03:26.555 Configuring doxy-api-html.conf using configuration 00:03:26.555 Configuring doxy-api-man.conf using configuration 00:03:26.555 Program mandb found: YES (/usr/bin/mandb) 00:03:26.555 Program sphinx-build found: NO 00:03:26.555 Configuring rte_build_config.h using configuration 00:03:26.555 Message: 00:03:26.555 ================= 00:03:26.555 Applications Enabled 00:03:26.555 ================= 00:03:26.555 00:03:26.555 apps: 00:03:26.555 00:03:26.555 00:03:26.555 Message: 00:03:26.555 ================= 00:03:26.555 Libraries Enabled 00:03:26.555 ================= 00:03:26.555 00:03:26.555 libs: 00:03:26.555 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:26.555 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:26.555 cryptodev, dmadev, power, reorder, security, vhost, 00:03:26.555 00:03:26.555 Message: 00:03:26.555 =============== 00:03:26.555 Drivers Enabled 
00:03:26.555 =============== 00:03:26.555 00:03:26.555 common: 00:03:26.555 00:03:26.555 bus: 00:03:26.555 pci, vdev, 00:03:26.555 mempool: 00:03:26.555 ring, 00:03:26.555 dma: 00:03:26.555 00:03:26.555 net: 00:03:26.555 00:03:26.555 crypto: 00:03:26.555 00:03:26.555 compress: 00:03:26.555 00:03:26.555 vdpa: 00:03:26.555 00:03:26.555 00:03:26.555 Message: 00:03:26.555 ================= 00:03:26.555 Content Skipped 00:03:26.555 ================= 00:03:26.555 00:03:26.555 apps: 00:03:26.555 dumpcap: explicitly disabled via build config 00:03:26.555 graph: explicitly disabled via build config 00:03:26.555 pdump: explicitly disabled via build config 00:03:26.555 proc-info: explicitly disabled via build config 00:03:26.555 test-acl: explicitly disabled via build config 00:03:26.555 test-bbdev: explicitly disabled via build config 00:03:26.555 test-cmdline: explicitly disabled via build config 00:03:26.555 test-compress-perf: explicitly disabled via build config 00:03:26.555 test-crypto-perf: explicitly disabled via build config 00:03:26.555 test-dma-perf: explicitly disabled via build config 00:03:26.555 test-eventdev: explicitly disabled via build config 00:03:26.555 test-fib: explicitly disabled via build config 00:03:26.555 test-flow-perf: explicitly disabled via build config 00:03:26.555 test-gpudev: explicitly disabled via build config 00:03:26.555 test-mldev: explicitly disabled via build config 00:03:26.555 test-pipeline: explicitly disabled via build config 00:03:26.555 test-pmd: explicitly disabled via build config 00:03:26.555 test-regex: explicitly disabled via build config 00:03:26.555 test-sad: explicitly disabled via build config 00:03:26.555 test-security-perf: explicitly disabled via build config 00:03:26.555 00:03:26.555 libs: 00:03:26.555 argparse: explicitly disabled via build config 00:03:26.555 metrics: explicitly disabled via build config 00:03:26.555 acl: explicitly disabled via build config 00:03:26.555 bbdev: explicitly disabled via build config 00:03:26.555 bitratestats: explicitly disabled via build config 00:03:26.555 bpf: explicitly disabled via build config 00:03:26.555 cfgfile: explicitly disabled via build config 00:03:26.555 distributor: explicitly disabled via build config 00:03:26.555 efd: explicitly disabled via build config 00:03:26.555 eventdev: explicitly disabled via build config 00:03:26.555 dispatcher: explicitly disabled via build config 00:03:26.555 gpudev: explicitly disabled via build config 00:03:26.555 gro: explicitly disabled via build config 00:03:26.555 gso: explicitly disabled via build config 00:03:26.555 ip_frag: explicitly disabled via build config 00:03:26.555 jobstats: explicitly disabled via build config 00:03:26.555 latencystats: explicitly disabled via build config 00:03:26.555 lpm: explicitly disabled via build config 00:03:26.555 member: explicitly disabled via build config 00:03:26.555 pcapng: explicitly disabled via build config 00:03:26.555 rawdev: explicitly disabled via build config 00:03:26.555 regexdev: explicitly disabled via build config 00:03:26.555 mldev: explicitly disabled via build config 00:03:26.555 rib: explicitly disabled via build config 00:03:26.555 sched: explicitly disabled via build config 00:03:26.555 stack: explicitly disabled via build config 00:03:26.555 ipsec: explicitly disabled via build config 00:03:26.555 pdcp: explicitly disabled via build config 00:03:26.555 fib: explicitly disabled via build config 00:03:26.555 port: explicitly disabled via build config 00:03:26.555 pdump: explicitly disabled via 
build config 00:03:26.555 table: explicitly disabled via build config 00:03:26.555 pipeline: explicitly disabled via build config 00:03:26.555 graph: explicitly disabled via build config 00:03:26.555 node: explicitly disabled via build config 00:03:26.555 00:03:26.555 drivers: 00:03:26.555 common/cpt: not in enabled drivers build config 00:03:26.555 common/dpaax: not in enabled drivers build config 00:03:26.555 common/iavf: not in enabled drivers build config 00:03:26.555 common/idpf: not in enabled drivers build config 00:03:26.555 common/ionic: not in enabled drivers build config 00:03:26.555 common/mvep: not in enabled drivers build config 00:03:26.555 common/octeontx: not in enabled drivers build config 00:03:26.555 bus/auxiliary: not in enabled drivers build config 00:03:26.555 bus/cdx: not in enabled drivers build config 00:03:26.555 bus/dpaa: not in enabled drivers build config 00:03:26.555 bus/fslmc: not in enabled drivers build config 00:03:26.555 bus/ifpga: not in enabled drivers build config 00:03:26.555 bus/platform: not in enabled drivers build config 00:03:26.555 bus/uacce: not in enabled drivers build config 00:03:26.555 bus/vmbus: not in enabled drivers build config 00:03:26.555 common/cnxk: not in enabled drivers build config 00:03:26.555 common/mlx5: not in enabled drivers build config 00:03:26.555 common/nfp: not in enabled drivers build config 00:03:26.555 common/nitrox: not in enabled drivers build config 00:03:26.555 common/qat: not in enabled drivers build config 00:03:26.555 common/sfc_efx: not in enabled drivers build config 00:03:26.555 mempool/bucket: not in enabled drivers build config 00:03:26.555 mempool/cnxk: not in enabled drivers build config 00:03:26.555 mempool/dpaa: not in enabled drivers build config 00:03:26.555 mempool/dpaa2: not in enabled drivers build config 00:03:26.555 mempool/octeontx: not in enabled drivers build config 00:03:26.555 mempool/stack: not in enabled drivers build config 00:03:26.555 dma/cnxk: not in enabled drivers build config 00:03:26.555 dma/dpaa: not in enabled drivers build config 00:03:26.555 dma/dpaa2: not in enabled drivers build config 00:03:26.555 dma/hisilicon: not in enabled drivers build config 00:03:26.555 dma/idxd: not in enabled drivers build config 00:03:26.555 dma/ioat: not in enabled drivers build config 00:03:26.555 dma/skeleton: not in enabled drivers build config 00:03:26.555 net/af_packet: not in enabled drivers build config 00:03:26.555 net/af_xdp: not in enabled drivers build config 00:03:26.555 net/ark: not in enabled drivers build config 00:03:26.555 net/atlantic: not in enabled drivers build config 00:03:26.555 net/avp: not in enabled drivers build config 00:03:26.555 net/axgbe: not in enabled drivers build config 00:03:26.555 net/bnx2x: not in enabled drivers build config 00:03:26.555 net/bnxt: not in enabled drivers build config 00:03:26.555 net/bonding: not in enabled drivers build config 00:03:26.555 net/cnxk: not in enabled drivers build config 00:03:26.555 net/cpfl: not in enabled drivers build config 00:03:26.555 net/cxgbe: not in enabled drivers build config 00:03:26.555 net/dpaa: not in enabled drivers build config 00:03:26.555 net/dpaa2: not in enabled drivers build config 00:03:26.555 net/e1000: not in enabled drivers build config 00:03:26.555 net/ena: not in enabled drivers build config 00:03:26.555 net/enetc: not in enabled drivers build config 00:03:26.555 net/enetfec: not in enabled drivers build config 00:03:26.555 net/enic: not in enabled drivers build config 00:03:26.555 net/failsafe: 
not in enabled drivers build config 00:03:26.555 net/fm10k: not in enabled drivers build config 00:03:26.555 net/gve: not in enabled drivers build config 00:03:26.555 net/hinic: not in enabled drivers build config 00:03:26.555 net/hns3: not in enabled drivers build config 00:03:26.555 net/i40e: not in enabled drivers build config 00:03:26.555 net/iavf: not in enabled drivers build config 00:03:26.555 net/ice: not in enabled drivers build config 00:03:26.555 net/idpf: not in enabled drivers build config 00:03:26.555 net/igc: not in enabled drivers build config 00:03:26.555 net/ionic: not in enabled drivers build config 00:03:26.555 net/ipn3ke: not in enabled drivers build config 00:03:26.555 net/ixgbe: not in enabled drivers build config 00:03:26.555 net/mana: not in enabled drivers build config 00:03:26.555 net/memif: not in enabled drivers build config 00:03:26.555 net/mlx4: not in enabled drivers build config 00:03:26.556 net/mlx5: not in enabled drivers build config 00:03:26.556 net/mvneta: not in enabled drivers build config 00:03:26.556 net/mvpp2: not in enabled drivers build config 00:03:26.556 net/netvsc: not in enabled drivers build config 00:03:26.556 net/nfb: not in enabled drivers build config 00:03:26.556 net/nfp: not in enabled drivers build config 00:03:26.556 net/ngbe: not in enabled drivers build config 00:03:26.556 net/null: not in enabled drivers build config 00:03:26.556 net/octeontx: not in enabled drivers build config 00:03:26.556 net/octeon_ep: not in enabled drivers build config 00:03:26.556 net/pcap: not in enabled drivers build config 00:03:26.556 net/pfe: not in enabled drivers build config 00:03:26.556 net/qede: not in enabled drivers build config 00:03:26.556 net/ring: not in enabled drivers build config 00:03:26.556 net/sfc: not in enabled drivers build config 00:03:26.556 net/softnic: not in enabled drivers build config 00:03:26.556 net/tap: not in enabled drivers build config 00:03:26.556 net/thunderx: not in enabled drivers build config 00:03:26.556 net/txgbe: not in enabled drivers build config 00:03:26.556 net/vdev_netvsc: not in enabled drivers build config 00:03:26.556 net/vhost: not in enabled drivers build config 00:03:26.556 net/virtio: not in enabled drivers build config 00:03:26.556 net/vmxnet3: not in enabled drivers build config 00:03:26.556 raw/*: missing internal dependency, "rawdev" 00:03:26.556 crypto/armv8: not in enabled drivers build config 00:03:26.556 crypto/bcmfs: not in enabled drivers build config 00:03:26.556 crypto/caam_jr: not in enabled drivers build config 00:03:26.556 crypto/ccp: not in enabled drivers build config 00:03:26.556 crypto/cnxk: not in enabled drivers build config 00:03:26.556 crypto/dpaa_sec: not in enabled drivers build config 00:03:26.556 crypto/dpaa2_sec: not in enabled drivers build config 00:03:26.556 crypto/ipsec_mb: not in enabled drivers build config 00:03:26.556 crypto/mlx5: not in enabled drivers build config 00:03:26.556 crypto/mvsam: not in enabled drivers build config 00:03:26.556 crypto/nitrox: not in enabled drivers build config 00:03:26.556 crypto/null: not in enabled drivers build config 00:03:26.556 crypto/octeontx: not in enabled drivers build config 00:03:26.556 crypto/openssl: not in enabled drivers build config 00:03:26.556 crypto/scheduler: not in enabled drivers build config 00:03:26.556 crypto/uadk: not in enabled drivers build config 00:03:26.556 crypto/virtio: not in enabled drivers build config 00:03:26.556 compress/isal: not in enabled drivers build config 00:03:26.556 compress/mlx5: not 
in enabled drivers build config 00:03:26.556 compress/nitrox: not in enabled drivers build config 00:03:26.556 compress/octeontx: not in enabled drivers build config 00:03:26.556 compress/zlib: not in enabled drivers build config 00:03:26.556 regex/*: missing internal dependency, "regexdev" 00:03:26.556 ml/*: missing internal dependency, "mldev" 00:03:26.556 vdpa/ifc: not in enabled drivers build config 00:03:26.556 vdpa/mlx5: not in enabled drivers build config 00:03:26.556 vdpa/nfp: not in enabled drivers build config 00:03:26.556 vdpa/sfc: not in enabled drivers build config 00:03:26.556 event/*: missing internal dependency, "eventdev" 00:03:26.556 baseband/*: missing internal dependency, "bbdev" 00:03:26.556 gpu/*: missing internal dependency, "gpudev" 00:03:26.556 00:03:26.556 00:03:26.556 Build targets in project: 85 00:03:26.556 00:03:26.556 DPDK 24.03.0 00:03:26.556 00:03:26.556 User defined options 00:03:26.556 buildtype : debug 00:03:26.556 default_library : shared 00:03:26.556 libdir : lib 00:03:26.556 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:26.556 b_sanitize : address 00:03:26.556 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:26.556 c_link_args : 00:03:26.556 cpu_instruction_set: native 00:03:26.556 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:26.556 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:26.556 enable_docs : false 00:03:26.556 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:26.556 enable_kmods : false 00:03:26.556 max_lcores : 128 00:03:26.556 tests : false 00:03:26.556 00:03:26.556 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:26.556 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:26.556 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:26.556 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:26.556 [3/268] Linking static target lib/librte_kvargs.a 00:03:26.556 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:26.556 [5/268] Linking static target lib/librte_log.a 00:03:26.556 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:26.814 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.814 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:26.814 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:26.814 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:26.814 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:27.072 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:27.072 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:27.072 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:27.330 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.330 [16/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:27.330 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:27.330 [18/268] Linking static target lib/librte_telemetry.a 00:03:27.330 [19/268] Linking target lib/librte_log.so.24.1 00:03:27.330 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:27.587 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:27.587 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:27.845 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:27.845 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:27.845 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:27.845 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:28.103 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:28.103 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:28.103 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:28.103 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:28.103 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:28.103 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:28.103 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.361 [34/268] Linking target lib/librte_telemetry.so.24.1 00:03:28.618 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:28.618 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:28.618 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:28.876 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:28.876 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:28.876 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:28.876 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:28.876 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:28.876 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:28.876 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:29.134 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:29.134 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:29.392 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:29.392 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:29.649 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:29.649 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:29.649 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:29.907 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:29.907 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:29.907 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:29.907 [55/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:29.907 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:30.164 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:30.164 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:30.422 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:30.422 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:30.422 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:30.679 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:30.679 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:30.679 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:30.937 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:30.937 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:30.937 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:31.195 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:31.195 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:31.452 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:31.452 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:31.452 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:31.713 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:31.713 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:31.713 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:31.713 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:31.713 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:31.971 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:32.229 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:32.229 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:32.229 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:32.487 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:32.487 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:32.767 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:32.767 [85/268] Linking static target lib/librte_eal.a 00:03:32.767 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:32.767 [87/268] Linking static target lib/librte_ring.a 00:03:32.767 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:33.024 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:33.282 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:33.282 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:33.282 [92/268] Linking static target lib/librte_mempool.a 00:03:33.282 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:33.282 [94/268] Linking static target lib/librte_rcu.a 00:03:33.282 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.539 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 
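The objects above, through "[85/268] Linking static target lib/librte_eal.a", are DPDK's Environment Abstraction Layer, the runtime core that every DPDK-based process (SPDK included) brings up first. For orientation only, a minimal EAL consumer against the public rte_eal.h/rte_lcore.h API looks roughly like the following; this is a hand-written sketch, not a file from this build:

    /* Minimal DPDK EAL consumer (illustrative sketch). Built against the
     * libraries produced above, e.g.:
     *   cc eal_sketch.c $(pkg-config --cflags --libs libdpdk)
     */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() consumes EAL arguments (lcores, memory, PCI, ...)
         * and returns a negative value on failure. */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        printf("EAL up with %u lcore(s)\n", rte_lcore_count());
        rte_eal_cleanup();
        return 0;
    }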
00:03:33.539 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:33.539 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:33.796 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:33.796 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:33.796 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.053 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:34.053 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:34.053 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:34.053 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:34.053 [106/268] Linking static target lib/librte_mbuf.a 00:03:34.053 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:34.309 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:34.309 [109/268] Linking static target lib/librte_net.a 00:03:34.309 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:34.309 [111/268] Linking static target lib/librte_meter.a 00:03:34.566 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.566 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:34.823 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:34.823 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.823 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.823 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:35.388 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.388 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:35.646 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:35.646 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:35.904 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:35.904 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:36.162 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:36.162 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:36.420 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:36.420 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:36.420 [128/268] Linking static target lib/librte_pci.a 00:03:36.420 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:36.420 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:36.420 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:36.420 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:36.678 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:36.678 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:36.678 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:36.678 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.678 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:36.936 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:36.936 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:36.936 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:36.936 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:36.936 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:36.936 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:36.936 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:36.936 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:37.195 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:37.195 [147/268] Linking static target lib/librte_cmdline.a 00:03:37.195 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:37.761 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:37.761 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:37.761 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:38.022 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:38.022 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:38.022 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:38.023 [155/268] Linking static target lib/librte_timer.a 00:03:38.023 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:38.285 [157/268] Linking static target lib/librte_ethdev.a 00:03:38.667 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:38.667 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:38.667 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.667 [161/268] Linking static target lib/librte_compressdev.a 00:03:38.667 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:38.667 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:38.925 [164/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.925 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:38.925 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:38.925 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:38.925 [168/268] Linking static target lib/librte_hash.a 00:03:38.925 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:39.183 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:39.183 [171/268] Linking static target lib/librte_dmadev.a 00:03:39.440 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:39.440 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:39.440 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.440 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:39.697 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:39.954 [177/268] 
Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.954 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:39.954 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:39.954 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:39.954 [181/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.954 [182/268] Linking static target lib/librte_cryptodev.a 00:03:39.954 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:39.954 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:40.212 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:40.212 [186/268] Linking static target lib/librte_power.a 00:03:40.470 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:40.470 [188/268] Linking static target lib/librte_reorder.a 00:03:40.728 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:40.728 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:40.728 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:40.728 [192/268] Linking static target lib/librte_security.a 00:03:40.987 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:40.987 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.245 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:41.245 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.503 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.503 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:41.503 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:41.760 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:41.760 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:41.760 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:41.760 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.018 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:42.018 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:42.018 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:42.276 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:42.276 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:42.276 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:42.534 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:42.534 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:42.534 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:42.534 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:42.534 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:42.534 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:42.534 [216/268] Generating 
drivers/rte_bus_pci.pmd.c with a custom command 00:03:42.792 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:42.792 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:42.792 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:42.792 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:42.792 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:42.792 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.792 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:43.050 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:43.050 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:43.050 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:43.051 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.308 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.567 [229/268] Linking target lib/librte_eal.so.24.1 00:03:43.567 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:43.567 [231/268] Linking target lib/librte_meter.so.24.1 00:03:43.567 [232/268] Linking target lib/librte_pci.so.24.1 00:03:43.567 [233/268] Linking target lib/librte_ring.so.24.1 00:03:43.567 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:43.567 [235/268] Linking target lib/librte_dmadev.so.24.1 00:03:43.567 [236/268] Linking target lib/librte_timer.so.24.1 00:03:43.825 [237/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:43.825 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:43.825 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:43.825 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:43.825 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:43.825 [242/268] Linking target lib/librte_rcu.so.24.1 00:03:43.825 [243/268] Linking target lib/librte_mempool.so.24.1 00:03:43.825 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:44.084 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:44.084 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:44.084 [247/268] Linking target lib/librte_mbuf.so.24.1 00:03:44.084 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:44.084 [249/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:44.342 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:44.342 [251/268] Linking target lib/librte_compressdev.so.24.1 00:03:44.342 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:03:44.342 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:44.342 [254/268] Linking target lib/librte_net.so.24.1 00:03:44.342 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:44.600 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:44.600 [257/268] 
Linking target lib/librte_cmdline.so.24.1 00:03:44.600 [258/268] Linking target lib/librte_security.so.24.1 00:03:44.600 [259/268] Linking target lib/librte_hash.so.24.1 00:03:44.600 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:45.569 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.569 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:45.569 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:45.569 [264/268] Linking target lib/librte_power.so.24.1 00:03:48.114 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:48.373 [266/268] Linking static target lib/librte_vhost.a 00:03:49.750 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.007 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:50.007 INFO: autodetecting backend as ninja 00:03:50.007 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:50.944 CC lib/ut_mock/mock.o 00:03:50.944 CC lib/log/log_flags.o 00:03:50.944 CC lib/log/log.o 00:03:50.944 CC lib/log/log_deprecated.o 00:03:50.944 CC lib/ut/ut.o 00:03:51.202 LIB libspdk_ut_mock.a 00:03:51.202 LIB libspdk_ut.a 00:03:51.202 SO libspdk_ut_mock.so.6.0 00:03:51.202 SO libspdk_ut.so.2.0 00:03:51.202 LIB libspdk_log.a 00:03:51.466 SYMLINK libspdk_ut.so 00:03:51.466 SO libspdk_log.so.7.0 00:03:51.466 SYMLINK libspdk_ut_mock.so 00:03:51.466 SYMLINK libspdk_log.so 00:03:51.729 CC lib/util/base64.o 00:03:51.729 CC lib/util/bit_array.o 00:03:51.729 CC lib/util/cpuset.o 00:03:51.729 CXX lib/trace_parser/trace.o 00:03:51.729 CC lib/util/crc16.o 00:03:51.729 CC lib/util/crc32.o 00:03:51.729 CC lib/ioat/ioat.o 00:03:51.729 CC lib/util/crc32c.o 00:03:51.729 CC lib/dma/dma.o 00:03:51.729 CC lib/util/crc32_ieee.o 00:03:51.729 CC lib/vfio_user/host/vfio_user_pci.o 00:03:51.729 CC lib/util/crc64.o 00:03:51.986 LIB libspdk_dma.a 00:03:51.986 CC lib/util/dif.o 00:03:51.986 CC lib/util/fd.o 00:03:51.986 SO libspdk_dma.so.4.0 00:03:51.986 CC lib/util/fd_group.o 00:03:51.986 CC lib/util/file.o 00:03:51.986 SYMLINK libspdk_dma.so 00:03:51.986 CC lib/util/hexlify.o 00:03:51.986 CC lib/util/iov.o 00:03:51.986 CC lib/vfio_user/host/vfio_user.o 00:03:51.986 CC lib/util/math.o 00:03:51.986 LIB libspdk_ioat.a 00:03:51.986 SO libspdk_ioat.so.7.0 00:03:52.244 CC lib/util/net.o 00:03:52.244 CC lib/util/pipe.o 00:03:52.244 SYMLINK libspdk_ioat.so 00:03:52.244 CC lib/util/strerror_tls.o 00:03:52.244 CC lib/util/string.o 00:03:52.244 CC lib/util/uuid.o 00:03:52.244 CC lib/util/xor.o 00:03:52.244 CC lib/util/zipf.o 00:03:52.244 LIB libspdk_vfio_user.a 00:03:52.244 SO libspdk_vfio_user.so.5.0 00:03:52.244 SYMLINK libspdk_vfio_user.so 00:03:52.538 LIB libspdk_util.a 00:03:52.796 SO libspdk_util.so.10.0 00:03:52.796 LIB libspdk_trace_parser.a 00:03:52.796 SO libspdk_trace_parser.so.5.0 00:03:53.053 SYMLINK libspdk_util.so 00:03:53.053 SYMLINK libspdk_trace_parser.so 00:03:53.053 CC lib/rdma_provider/common.o 00:03:53.053 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:53.053 CC lib/json/json_util.o 00:03:53.053 CC lib/json/json_parse.o 00:03:53.053 CC lib/json/json_write.o 00:03:53.053 CC lib/env_dpdk/env.o 00:03:53.053 CC lib/rdma_utils/rdma_utils.o 00:03:53.053 CC lib/vmd/vmd.o 00:03:53.053 CC lib/conf/conf.o 00:03:53.053 CC lib/idxd/idxd.o 00:03:53.312 CC lib/env_dpdk/memory.o 00:03:53.312 LIB 
libspdk_rdma_provider.a 00:03:53.312 SO libspdk_rdma_provider.so.6.0 00:03:53.570 LIB libspdk_conf.a 00:03:53.570 CC lib/idxd/idxd_user.o 00:03:53.570 CC lib/idxd/idxd_kernel.o 00:03:53.570 SO libspdk_conf.so.6.0 00:03:53.570 SYMLINK libspdk_rdma_provider.so 00:03:53.570 CC lib/env_dpdk/pci.o 00:03:53.570 LIB libspdk_rdma_utils.a 00:03:53.570 SO libspdk_rdma_utils.so.1.0 00:03:53.570 SYMLINK libspdk_conf.so 00:03:53.570 CC lib/env_dpdk/init.o 00:03:53.570 SYMLINK libspdk_rdma_utils.so 00:03:53.570 CC lib/vmd/led.o 00:03:53.570 CC lib/env_dpdk/threads.o 00:03:53.828 LIB libspdk_json.a 00:03:53.828 CC lib/env_dpdk/pci_ioat.o 00:03:53.828 CC lib/env_dpdk/pci_virtio.o 00:03:53.828 SO libspdk_json.so.6.0 00:03:53.828 CC lib/env_dpdk/pci_vmd.o 00:03:53.828 CC lib/env_dpdk/pci_idxd.o 00:03:53.828 CC lib/env_dpdk/pci_event.o 00:03:53.828 CC lib/env_dpdk/sigbus_handler.o 00:03:53.828 SYMLINK libspdk_json.so 00:03:54.085 CC lib/env_dpdk/pci_dpdk.o 00:03:54.085 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:54.085 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:54.085 LIB libspdk_vmd.a 00:03:54.085 LIB libspdk_idxd.a 00:03:54.085 SO libspdk_vmd.so.6.0 00:03:54.085 SO libspdk_idxd.so.12.0 00:03:54.085 CC lib/jsonrpc/jsonrpc_server.o 00:03:54.085 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:54.085 CC lib/jsonrpc/jsonrpc_client.o 00:03:54.085 SYMLINK libspdk_vmd.so 00:03:54.085 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:54.343 SYMLINK libspdk_idxd.so 00:03:54.603 LIB libspdk_jsonrpc.a 00:03:54.603 SO libspdk_jsonrpc.so.6.0 00:03:54.603 SYMLINK libspdk_jsonrpc.so 00:03:54.889 CC lib/rpc/rpc.o 00:03:55.147 LIB libspdk_env_dpdk.a 00:03:55.147 LIB libspdk_rpc.a 00:03:55.147 SO libspdk_rpc.so.6.0 00:03:55.147 SO libspdk_env_dpdk.so.15.0 00:03:55.147 SYMLINK libspdk_rpc.so 00:03:55.406 SYMLINK libspdk_env_dpdk.so 00:03:55.406 CC lib/trace/trace.o 00:03:55.406 CC lib/trace/trace_flags.o 00:03:55.406 CC lib/trace/trace_rpc.o 00:03:55.406 CC lib/notify/notify.o 00:03:55.406 CC lib/notify/notify_rpc.o 00:03:55.406 CC lib/keyring/keyring.o 00:03:55.406 CC lib/keyring/keyring_rpc.o 00:03:55.665 LIB libspdk_notify.a 00:03:55.665 SO libspdk_notify.so.6.0 00:03:55.665 LIB libspdk_trace.a 00:03:55.665 LIB libspdk_keyring.a 00:03:55.923 SO libspdk_trace.so.10.0 00:03:55.923 SO libspdk_keyring.so.1.0 00:03:55.923 SYMLINK libspdk_notify.so 00:03:55.923 SYMLINK libspdk_keyring.so 00:03:55.923 SYMLINK libspdk_trace.so 00:03:56.182 CC lib/thread/thread.o 00:03:56.182 CC lib/thread/iobuf.o 00:03:56.182 CC lib/sock/sock.o 00:03:56.182 CC lib/sock/sock_rpc.o 00:03:56.749 LIB libspdk_sock.a 00:03:56.749 SO libspdk_sock.so.10.0 00:03:56.749 SYMLINK libspdk_sock.so 00:03:57.008 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:57.008 CC lib/nvme/nvme_ctrlr.o 00:03:57.008 CC lib/nvme/nvme_fabric.o 00:03:57.008 CC lib/nvme/nvme_ns_cmd.o 00:03:57.008 CC lib/nvme/nvme_pcie_common.o 00:03:57.008 CC lib/nvme/nvme_qpair.o 00:03:57.008 CC lib/nvme/nvme_ns.o 00:03:57.008 CC lib/nvme/nvme_pcie.o 00:03:57.008 CC lib/nvme/nvme.o 00:03:57.943 CC lib/nvme/nvme_quirks.o 00:03:57.943 CC lib/nvme/nvme_transport.o 00:03:57.943 CC lib/nvme/nvme_discovery.o 00:03:57.943 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:58.202 LIB libspdk_thread.a 00:03:58.202 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:58.202 CC lib/nvme/nvme_tcp.o 00:03:58.202 SO libspdk_thread.so.10.1 00:03:58.202 CC lib/nvme/nvme_opal.o 00:03:58.202 SYMLINK libspdk_thread.so 00:03:58.202 CC lib/nvme/nvme_io_msg.o 00:03:58.461 CC lib/nvme/nvme_poll_group.o 00:03:58.461 CC lib/nvme/nvme_zns.o 00:03:58.720 CC lib/nvme/nvme_stubs.o 
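The lib/nvme objects in this stretch make up SPDK's userspace NVMe driver. As a reference point, its canonical entry path is spdk_env_init() followed by spdk_nvme_probe(); the sketch below follows the pattern of SPDK's hello_world example, but the names are invented and error cleanup is trimmed, so treat it as illustrative only:

    #include <spdk/stdinc.h>
    #include <spdk/env.h>
    #include <spdk/nvme.h>

    /* Called once per discovered controller; returning true means "attach". */
    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("probing %s\n", trid->traddr);
        return true;
    }

    /* Called after a controller has been attached and initialized. */
    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("attached to %s\n", trid->traddr);
    }

    int main(void)
    {
        struct spdk_env_opts env_opts;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "nvme_probe_sketch";    /* illustrative name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }
        /* NULL transport ID: enumerate local PCIe NVMe controllers. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }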
00:03:58.720 CC lib/accel/accel.o 00:03:58.720 CC lib/nvme/nvme_auth.o 00:03:58.979 CC lib/nvme/nvme_cuse.o 00:03:58.979 CC lib/blob/blobstore.o 00:03:58.979 CC lib/init/json_config.o 00:03:58.979 CC lib/accel/accel_rpc.o 00:03:59.238 CC lib/accel/accel_sw.o 00:03:59.238 CC lib/nvme/nvme_rdma.o 00:03:59.238 CC lib/init/subsystem.o 00:03:59.497 CC lib/virtio/virtio.o 00:03:59.497 CC lib/init/subsystem_rpc.o 00:03:59.497 CC lib/blob/request.o 00:03:59.757 CC lib/init/rpc.o 00:03:59.757 CC lib/virtio/virtio_vhost_user.o 00:03:59.757 CC lib/virtio/virtio_vfio_user.o 00:03:59.757 LIB libspdk_init.a 00:04:00.015 CC lib/blob/zeroes.o 00:04:00.016 SO libspdk_init.so.5.0 00:04:00.016 CC lib/virtio/virtio_pci.o 00:04:00.016 SYMLINK libspdk_init.so 00:04:00.016 CC lib/blob/blob_bs_dev.o 00:04:00.016 LIB libspdk_accel.a 00:04:00.016 SO libspdk_accel.so.16.0 00:04:00.274 SYMLINK libspdk_accel.so 00:04:00.274 CC lib/event/app.o 00:04:00.274 CC lib/event/reactor.o 00:04:00.274 CC lib/event/app_rpc.o 00:04:00.274 CC lib/event/log_rpc.o 00:04:00.274 CC lib/event/scheduler_static.o 00:04:00.274 LIB libspdk_virtio.a 00:04:00.274 SO libspdk_virtio.so.7.0 00:04:00.274 CC lib/bdev/bdev.o 00:04:00.274 CC lib/bdev/bdev_rpc.o 00:04:00.274 CC lib/bdev/bdev_zone.o 00:04:00.531 CC lib/bdev/part.o 00:04:00.531 SYMLINK libspdk_virtio.so 00:04:00.531 CC lib/bdev/scsi_nvme.o 00:04:00.789 LIB libspdk_event.a 00:04:00.789 SO libspdk_event.so.14.0 00:04:01.048 SYMLINK libspdk_event.so 00:04:01.048 LIB libspdk_nvme.a 00:04:01.320 SO libspdk_nvme.so.13.1 00:04:01.604 SYMLINK libspdk_nvme.so 00:04:03.507 LIB libspdk_blob.a 00:04:03.804 SO libspdk_blob.so.11.0 00:04:03.804 SYMLINK libspdk_blob.so 00:04:04.085 LIB libspdk_bdev.a 00:04:04.085 SO libspdk_bdev.so.16.0 00:04:04.085 CC lib/blobfs/blobfs.o 00:04:04.085 CC lib/lvol/lvol.o 00:04:04.085 CC lib/blobfs/tree.o 00:04:04.085 SYMLINK libspdk_bdev.so 00:04:04.344 CC lib/scsi/dev.o 00:04:04.344 CC lib/scsi/port.o 00:04:04.344 CC lib/scsi/lun.o 00:04:04.344 CC lib/ftl/ftl_core.o 00:04:04.344 CC lib/ublk/ublk.o 00:04:04.344 CC lib/nvmf/ctrlr.o 00:04:04.344 CC lib/nbd/nbd.o 00:04:04.344 CC lib/scsi/scsi.o 00:04:04.602 CC lib/nbd/nbd_rpc.o 00:04:04.602 CC lib/ftl/ftl_init.o 00:04:04.602 CC lib/scsi/scsi_bdev.o 00:04:04.860 CC lib/ublk/ublk_rpc.o 00:04:04.860 CC lib/nvmf/ctrlr_discovery.o 00:04:04.860 CC lib/nvmf/ctrlr_bdev.o 00:04:04.860 LIB libspdk_nbd.a 00:04:04.860 SO libspdk_nbd.so.7.0 00:04:04.860 SYMLINK libspdk_nbd.so 00:04:05.118 CC lib/ftl/ftl_layout.o 00:04:05.118 CC lib/ftl/ftl_debug.o 00:04:05.118 CC lib/ftl/ftl_io.o 00:04:05.118 LIB libspdk_ublk.a 00:04:05.118 SO libspdk_ublk.so.3.0 00:04:05.118 LIB libspdk_blobfs.a 00:04:05.377 SO libspdk_blobfs.so.10.0 00:04:05.377 SYMLINK libspdk_ublk.so 00:04:05.377 CC lib/ftl/ftl_sb.o 00:04:05.377 CC lib/ftl/ftl_l2p.o 00:04:05.377 CC lib/ftl/ftl_l2p_flat.o 00:04:05.377 SYMLINK libspdk_blobfs.so 00:04:05.377 CC lib/ftl/ftl_nv_cache.o 00:04:05.377 CC lib/scsi/scsi_pr.o 00:04:05.377 CC lib/ftl/ftl_band.o 00:04:05.377 LIB libspdk_lvol.a 00:04:05.377 CC lib/ftl/ftl_band_ops.o 00:04:05.377 SO libspdk_lvol.so.10.0 00:04:05.636 SYMLINK libspdk_lvol.so 00:04:05.636 CC lib/ftl/ftl_writer.o 00:04:05.636 CC lib/ftl/ftl_rq.o 00:04:05.636 CC lib/ftl/ftl_reloc.o 00:04:05.636 CC lib/ftl/ftl_l2p_cache.o 00:04:05.636 CC lib/ftl/ftl_p2l.o 00:04:05.636 CC lib/scsi/scsi_rpc.o 00:04:05.636 CC lib/scsi/task.o 00:04:05.893 CC lib/ftl/mngt/ftl_mngt.o 00:04:05.894 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:05.894 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:05.894 
CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:05.894 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:05.894 LIB libspdk_scsi.a 00:04:06.158 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:06.158 SO libspdk_scsi.so.9.0 00:04:06.158 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:06.158 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:06.158 CC lib/nvmf/subsystem.o 00:04:06.158 SYMLINK libspdk_scsi.so 00:04:06.158 CC lib/nvmf/nvmf.o 00:04:06.430 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:06.430 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:06.430 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:06.430 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:06.430 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:06.430 CC lib/iscsi/conn.o 00:04:06.689 CC lib/ftl/utils/ftl_conf.o 00:04:06.689 CC lib/ftl/utils/ftl_md.o 00:04:06.689 CC lib/ftl/utils/ftl_mempool.o 00:04:06.689 CC lib/ftl/utils/ftl_bitmap.o 00:04:06.689 CC lib/nvmf/nvmf_rpc.o 00:04:06.689 CC lib/vhost/vhost.o 00:04:06.689 CC lib/vhost/vhost_rpc.o 00:04:06.946 CC lib/vhost/vhost_scsi.o 00:04:06.946 CC lib/nvmf/transport.o 00:04:06.946 CC lib/ftl/utils/ftl_property.o 00:04:07.204 CC lib/iscsi/init_grp.o 00:04:07.204 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:07.204 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:07.463 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:07.463 CC lib/iscsi/iscsi.o 00:04:07.463 CC lib/iscsi/md5.o 00:04:07.463 CC lib/iscsi/param.o 00:04:07.463 CC lib/vhost/vhost_blk.o 00:04:07.463 CC lib/vhost/rte_vhost_user.o 00:04:07.720 CC lib/iscsi/portal_grp.o 00:04:07.720 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:07.720 CC lib/iscsi/tgt_node.o 00:04:07.720 CC lib/nvmf/tcp.o 00:04:07.720 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:07.977 CC lib/iscsi/iscsi_subsystem.o 00:04:07.977 CC lib/iscsi/iscsi_rpc.o 00:04:07.977 CC lib/iscsi/task.o 00:04:07.977 CC lib/nvmf/stubs.o 00:04:07.977 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:08.235 CC lib/nvmf/mdns_server.o 00:04:08.235 CC lib/nvmf/rdma.o 00:04:08.235 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:08.492 CC lib/nvmf/auth.o 00:04:08.492 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:08.492 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:08.492 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:08.749 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:08.749 CC lib/ftl/base/ftl_base_dev.o 00:04:08.749 CC lib/ftl/base/ftl_base_bdev.o 00:04:08.749 CC lib/ftl/ftl_trace.o 00:04:09.007 LIB libspdk_vhost.a 00:04:09.007 SO libspdk_vhost.so.8.0 00:04:09.007 LIB libspdk_ftl.a 00:04:09.266 SYMLINK libspdk_vhost.so 00:04:09.266 LIB libspdk_iscsi.a 00:04:09.266 SO libspdk_ftl.so.9.0 00:04:09.266 SO libspdk_iscsi.so.8.0 00:04:09.524 SYMLINK libspdk_iscsi.so 00:04:09.782 SYMLINK libspdk_ftl.so 00:04:11.157 LIB libspdk_nvmf.a 00:04:11.157 SO libspdk_nvmf.so.19.0 00:04:11.723 SYMLINK libspdk_nvmf.so 00:04:11.982 CC module/env_dpdk/env_dpdk_rpc.o 00:04:11.982 CC module/sock/posix/posix.o 00:04:11.982 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:11.982 CC module/scheduler/gscheduler/gscheduler.o 00:04:11.982 CC module/accel/error/accel_error.o 00:04:11.982 CC module/accel/ioat/accel_ioat.o 00:04:11.982 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:11.982 CC module/keyring/linux/keyring.o 00:04:11.982 CC module/blob/bdev/blob_bdev.o 00:04:11.982 CC module/keyring/file/keyring.o 00:04:12.240 LIB libspdk_env_dpdk_rpc.a 00:04:12.240 SO libspdk_env_dpdk_rpc.so.6.0 00:04:12.240 SYMLINK libspdk_env_dpdk_rpc.so 00:04:12.240 CC module/keyring/file/keyring_rpc.o 00:04:12.240 CC module/keyring/linux/keyring_rpc.o 00:04:12.240 LIB libspdk_scheduler_gscheduler.a 00:04:12.240 CC module/accel/ioat/accel_ioat_rpc.o 
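lib/event, compiled a few entries back, is SPDK's application framework (reactors, subsystem init, RPC), and the module/event/subsystems/* objects that follow plug individual subsystems into it. A minimal framework consumer, sketched against spdk/event.h — the two-argument spdk_app_opts_init() is the form used by SPDK releases of this vintage, and the app name is invented:

    #include <spdk/stdinc.h>
    #include <spdk/event.h>

    /* Runs on the main reactor once all registered subsystems are up. */
    static void
    start_fn(void *ctx)
    {
        printf("framework up\n");
        spdk_app_stop(0);    /* a real app would arm pollers instead */
    }

    int main(void)
    {
        struct spdk_app_opts opts = {};
        int rc;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "event_sketch";
        rc = spdk_app_start(&opts, start_fn, NULL);    /* blocks until stop */
        spdk_app_fini();
        return rc;
    }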
00:04:12.240 LIB libspdk_scheduler_dpdk_governor.a 00:04:12.240 SO libspdk_scheduler_gscheduler.so.4.0 00:04:12.240 LIB libspdk_scheduler_dynamic.a 00:04:12.240 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:12.240 CC module/accel/error/accel_error_rpc.o 00:04:12.240 SO libspdk_scheduler_dynamic.so.4.0 00:04:12.240 SYMLINK libspdk_scheduler_gscheduler.so 00:04:12.240 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:12.240 LIB libspdk_keyring_linux.a 00:04:12.498 LIB libspdk_keyring_file.a 00:04:12.498 SYMLINK libspdk_scheduler_dynamic.so 00:04:12.498 LIB libspdk_blob_bdev.a 00:04:12.498 LIB libspdk_accel_ioat.a 00:04:12.498 SO libspdk_keyring_linux.so.1.0 00:04:12.498 SO libspdk_keyring_file.so.1.0 00:04:12.498 SO libspdk_blob_bdev.so.11.0 00:04:12.498 SO libspdk_accel_ioat.so.6.0 00:04:12.498 SYMLINK libspdk_keyring_file.so 00:04:12.498 SYMLINK libspdk_keyring_linux.so 00:04:12.498 LIB libspdk_accel_error.a 00:04:12.498 SYMLINK libspdk_blob_bdev.so 00:04:12.498 SYMLINK libspdk_accel_ioat.so 00:04:12.498 CC module/accel/dsa/accel_dsa.o 00:04:12.498 CC module/accel/dsa/accel_dsa_rpc.o 00:04:12.498 SO libspdk_accel_error.so.2.0 00:04:12.498 CC module/accel/iaa/accel_iaa.o 00:04:12.498 CC module/accel/iaa/accel_iaa_rpc.o 00:04:12.498 SYMLINK libspdk_accel_error.so 00:04:12.758 CC module/bdev/error/vbdev_error.o 00:04:12.758 CC module/bdev/delay/vbdev_delay.o 00:04:12.758 CC module/bdev/gpt/gpt.o 00:04:12.758 CC module/bdev/lvol/vbdev_lvol.o 00:04:12.758 CC module/blobfs/bdev/blobfs_bdev.o 00:04:12.758 LIB libspdk_accel_dsa.a 00:04:12.758 LIB libspdk_accel_iaa.a 00:04:12.758 SO libspdk_accel_dsa.so.5.0 00:04:12.758 SO libspdk_accel_iaa.so.3.0 00:04:12.758 CC module/bdev/malloc/bdev_malloc.o 00:04:13.016 CC module/bdev/null/bdev_null.o 00:04:13.016 SYMLINK libspdk_accel_iaa.so 00:04:13.016 CC module/bdev/gpt/vbdev_gpt.o 00:04:13.016 SYMLINK libspdk_accel_dsa.so 00:04:13.016 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:13.016 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:13.016 LIB libspdk_sock_posix.a 00:04:13.016 SO libspdk_sock_posix.so.6.0 00:04:13.016 CC module/bdev/error/vbdev_error_rpc.o 00:04:13.274 SYMLINK libspdk_sock_posix.so 00:04:13.274 CC module/bdev/nvme/bdev_nvme.o 00:04:13.274 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:13.274 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:13.274 CC module/bdev/nvme/nvme_rpc.o 00:04:13.274 LIB libspdk_blobfs_bdev.a 00:04:13.274 LIB libspdk_bdev_error.a 00:04:13.274 CC module/bdev/null/bdev_null_rpc.o 00:04:13.274 SO libspdk_blobfs_bdev.so.6.0 00:04:13.274 LIB libspdk_bdev_gpt.a 00:04:13.274 SO libspdk_bdev_error.so.6.0 00:04:13.274 SO libspdk_bdev_gpt.so.6.0 00:04:13.274 LIB libspdk_bdev_malloc.a 00:04:13.274 SYMLINK libspdk_bdev_error.so 00:04:13.274 SYMLINK libspdk_blobfs_bdev.so 00:04:13.274 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:13.274 SO libspdk_bdev_malloc.so.6.0 00:04:13.274 LIB libspdk_bdev_delay.a 00:04:13.274 SYMLINK libspdk_bdev_gpt.so 00:04:13.532 SO libspdk_bdev_delay.so.6.0 00:04:13.532 SYMLINK libspdk_bdev_malloc.so 00:04:13.532 CC module/bdev/nvme/bdev_mdns_client.o 00:04:13.532 CC module/bdev/nvme/vbdev_opal.o 00:04:13.532 LIB libspdk_bdev_null.a 00:04:13.532 SYMLINK libspdk_bdev_delay.so 00:04:13.532 CC module/bdev/passthru/vbdev_passthru.o 00:04:13.532 SO libspdk_bdev_null.so.6.0 00:04:13.532 CC module/bdev/raid/bdev_raid.o 00:04:13.532 CC module/bdev/split/vbdev_split.o 00:04:13.532 CC module/bdev/raid/bdev_raid_rpc.o 00:04:13.532 SYMLINK libspdk_bdev_null.so 00:04:13.791 CC module/bdev/raid/bdev_raid_sb.o 00:04:13.791 CC 
module/bdev/zone_block/vbdev_zone_block.o 00:04:13.791 CC module/bdev/raid/raid0.o 00:04:13.791 LIB libspdk_bdev_lvol.a 00:04:13.791 SO libspdk_bdev_lvol.so.6.0 00:04:13.791 CC module/bdev/split/vbdev_split_rpc.o 00:04:13.791 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:13.791 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:14.049 SYMLINK libspdk_bdev_lvol.so 00:04:14.049 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:14.049 CC module/bdev/raid/raid1.o 00:04:14.049 LIB libspdk_bdev_split.a 00:04:14.049 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:14.049 LIB libspdk_bdev_passthru.a 00:04:14.049 SO libspdk_bdev_split.so.6.0 00:04:14.049 CC module/bdev/xnvme/bdev_xnvme.o 00:04:14.049 CC module/bdev/raid/concat.o 00:04:14.049 LIB libspdk_bdev_zone_block.a 00:04:14.049 SO libspdk_bdev_passthru.so.6.0 00:04:14.049 SO libspdk_bdev_zone_block.so.6.0 00:04:14.307 SYMLINK libspdk_bdev_split.so 00:04:14.307 SYMLINK libspdk_bdev_passthru.so 00:04:14.307 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:14.307 SYMLINK libspdk_bdev_zone_block.so 00:04:14.307 CC module/bdev/aio/bdev_aio.o 00:04:14.307 CC module/bdev/aio/bdev_aio_rpc.o 00:04:14.565 LIB libspdk_bdev_xnvme.a 00:04:14.565 CC module/bdev/iscsi/bdev_iscsi.o 00:04:14.565 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:14.565 CC module/bdev/ftl/bdev_ftl.o 00:04:14.565 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:14.565 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:14.565 SO libspdk_bdev_xnvme.so.3.0 00:04:14.565 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:14.565 SYMLINK libspdk_bdev_xnvme.so 00:04:14.565 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:14.822 LIB libspdk_bdev_ftl.a 00:04:14.822 LIB libspdk_bdev_aio.a 00:04:14.822 SO libspdk_bdev_ftl.so.6.0 00:04:14.822 SO libspdk_bdev_aio.so.6.0 00:04:14.822 LIB libspdk_bdev_iscsi.a 00:04:15.080 SYMLINK libspdk_bdev_aio.so 00:04:15.080 SYMLINK libspdk_bdev_ftl.so 00:04:15.080 SO libspdk_bdev_iscsi.so.6.0 00:04:15.080 LIB libspdk_bdev_raid.a 00:04:15.080 SO libspdk_bdev_raid.so.6.0 00:04:15.080 SYMLINK libspdk_bdev_iscsi.so 00:04:15.080 SYMLINK libspdk_bdev_raid.so 00:04:15.338 LIB libspdk_bdev_virtio.a 00:04:15.338 SO libspdk_bdev_virtio.so.6.0 00:04:15.338 SYMLINK libspdk_bdev_virtio.so 00:04:16.273 LIB libspdk_bdev_nvme.a 00:04:16.273 SO libspdk_bdev_nvme.so.7.0 00:04:16.531 SYMLINK libspdk_bdev_nvme.so 00:04:17.096 CC module/event/subsystems/iobuf/iobuf.o 00:04:17.096 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:17.096 CC module/event/subsystems/keyring/keyring.o 00:04:17.096 CC module/event/subsystems/sock/sock.o 00:04:17.096 CC module/event/subsystems/scheduler/scheduler.o 00:04:17.096 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:17.096 CC module/event/subsystems/vmd/vmd.o 00:04:17.096 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:17.096 LIB libspdk_event_vhost_blk.a 00:04:17.096 LIB libspdk_event_scheduler.a 00:04:17.096 LIB libspdk_event_sock.a 00:04:17.097 LIB libspdk_event_keyring.a 00:04:17.097 SO libspdk_event_vhost_blk.so.3.0 00:04:17.097 LIB libspdk_event_iobuf.a 00:04:17.097 LIB libspdk_event_vmd.a 00:04:17.097 SO libspdk_event_scheduler.so.4.0 00:04:17.097 SO libspdk_event_keyring.so.1.0 00:04:17.097 SO libspdk_event_sock.so.5.0 00:04:17.097 SO libspdk_event_iobuf.so.3.0 00:04:17.097 SO libspdk_event_vmd.so.6.0 00:04:17.355 SYMLINK libspdk_event_vhost_blk.so 00:04:17.355 SYMLINK libspdk_event_keyring.so 00:04:17.355 SYMLINK libspdk_event_scheduler.so 00:04:17.355 SYMLINK libspdk_event_sock.so 00:04:17.355 SYMLINK libspdk_event_iobuf.so 00:04:17.355 SYMLINK 
libspdk_event_vmd.so 00:04:17.613 CC module/event/subsystems/accel/accel.o 00:04:17.871 LIB libspdk_event_accel.a 00:04:17.871 SO libspdk_event_accel.so.6.0 00:04:17.871 SYMLINK libspdk_event_accel.so 00:04:18.130 CC module/event/subsystems/bdev/bdev.o 00:04:18.388 LIB libspdk_event_bdev.a 00:04:18.388 SO libspdk_event_bdev.so.6.0 00:04:18.388 SYMLINK libspdk_event_bdev.so 00:04:18.646 CC module/event/subsystems/ublk/ublk.o 00:04:18.646 CC module/event/subsystems/scsi/scsi.o 00:04:18.646 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:18.646 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:18.646 CC module/event/subsystems/nbd/nbd.o 00:04:18.904 LIB libspdk_event_nbd.a 00:04:18.904 LIB libspdk_event_ublk.a 00:04:18.904 LIB libspdk_event_scsi.a 00:04:18.904 SO libspdk_event_nbd.so.6.0 00:04:18.904 SO libspdk_event_scsi.so.6.0 00:04:18.904 SO libspdk_event_ublk.so.3.0 00:04:18.904 SYMLINK libspdk_event_nbd.so 00:04:18.904 LIB libspdk_event_nvmf.a 00:04:18.904 SYMLINK libspdk_event_scsi.so 00:04:18.904 SYMLINK libspdk_event_ublk.so 00:04:18.904 SO libspdk_event_nvmf.so.6.0 00:04:19.179 SYMLINK libspdk_event_nvmf.so 00:04:19.179 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:19.179 CC module/event/subsystems/iscsi/iscsi.o 00:04:19.439 LIB libspdk_event_vhost_scsi.a 00:04:19.439 SO libspdk_event_vhost_scsi.so.3.0 00:04:19.439 LIB libspdk_event_iscsi.a 00:04:19.439 SO libspdk_event_iscsi.so.6.0 00:04:19.439 SYMLINK libspdk_event_vhost_scsi.so 00:04:19.698 SYMLINK libspdk_event_iscsi.so 00:04:19.698 SO libspdk.so.6.0 00:04:19.698 SYMLINK libspdk.so 00:04:19.955 CC app/trace_record/trace_record.o 00:04:19.955 CXX app/trace/trace.o 00:04:19.955 CC app/nvmf_tgt/nvmf_main.o 00:04:19.955 CC app/iscsi_tgt/iscsi_tgt.o 00:04:19.955 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:20.213 CC examples/ioat/perf/perf.o 00:04:20.213 CC examples/util/zipf/zipf.o 00:04:20.213 CC test/thread/poller_perf/poller_perf.o 00:04:20.213 CC test/dma/test_dma/test_dma.o 00:04:20.213 CC test/app/bdev_svc/bdev_svc.o 00:04:20.213 LINK iscsi_tgt 00:04:20.213 LINK poller_perf 00:04:20.213 LINK nvmf_tgt 00:04:20.213 LINK zipf 00:04:20.470 LINK spdk_trace_record 00:04:20.470 LINK ioat_perf 00:04:20.470 LINK bdev_svc 00:04:20.470 LINK interrupt_tgt 00:04:20.470 LINK spdk_trace 00:04:20.728 TEST_HEADER include/spdk/accel.h 00:04:20.728 TEST_HEADER include/spdk/accel_module.h 00:04:20.728 TEST_HEADER include/spdk/assert.h 00:04:20.728 TEST_HEADER include/spdk/barrier.h 00:04:20.728 TEST_HEADER include/spdk/base64.h 00:04:20.728 TEST_HEADER include/spdk/bdev.h 00:04:20.728 TEST_HEADER include/spdk/bdev_module.h 00:04:20.728 TEST_HEADER include/spdk/bdev_zone.h 00:04:20.728 TEST_HEADER include/spdk/bit_array.h 00:04:20.728 TEST_HEADER include/spdk/bit_pool.h 00:04:20.728 TEST_HEADER include/spdk/blob_bdev.h 00:04:20.728 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:20.728 TEST_HEADER include/spdk/blobfs.h 00:04:20.728 TEST_HEADER include/spdk/blob.h 00:04:20.728 TEST_HEADER include/spdk/conf.h 00:04:20.728 TEST_HEADER include/spdk/config.h 00:04:20.728 TEST_HEADER include/spdk/cpuset.h 00:04:20.728 TEST_HEADER include/spdk/crc16.h 00:04:20.728 TEST_HEADER include/spdk/crc32.h 00:04:20.728 TEST_HEADER include/spdk/crc64.h 00:04:20.728 TEST_HEADER include/spdk/dif.h 00:04:20.728 TEST_HEADER include/spdk/dma.h 00:04:20.728 TEST_HEADER include/spdk/endian.h 00:04:20.728 TEST_HEADER include/spdk/env_dpdk.h 00:04:20.728 TEST_HEADER include/spdk/env.h 00:04:20.728 TEST_HEADER include/spdk/event.h 00:04:20.728 LINK test_dma 
00:04:20.728 TEST_HEADER include/spdk/fd_group.h 00:04:20.728 TEST_HEADER include/spdk/fd.h 00:04:20.728 TEST_HEADER include/spdk/file.h 00:04:20.728 TEST_HEADER include/spdk/ftl.h 00:04:20.728 TEST_HEADER include/spdk/gpt_spec.h 00:04:20.728 TEST_HEADER include/spdk/hexlify.h 00:04:20.728 TEST_HEADER include/spdk/histogram_data.h 00:04:20.728 TEST_HEADER include/spdk/idxd.h 00:04:20.728 TEST_HEADER include/spdk/idxd_spec.h 00:04:20.728 TEST_HEADER include/spdk/init.h 00:04:20.728 TEST_HEADER include/spdk/ioat.h 00:04:20.728 TEST_HEADER include/spdk/ioat_spec.h 00:04:20.728 TEST_HEADER include/spdk/iscsi_spec.h 00:04:20.728 TEST_HEADER include/spdk/json.h 00:04:20.728 TEST_HEADER include/spdk/jsonrpc.h 00:04:20.728 TEST_HEADER include/spdk/keyring.h 00:04:20.728 TEST_HEADER include/spdk/keyring_module.h 00:04:20.728 TEST_HEADER include/spdk/likely.h 00:04:20.728 CC examples/ioat/verify/verify.o 00:04:20.728 TEST_HEADER include/spdk/log.h 00:04:20.728 TEST_HEADER include/spdk/lvol.h 00:04:20.728 TEST_HEADER include/spdk/memory.h 00:04:20.728 TEST_HEADER include/spdk/mmio.h 00:04:20.728 CC test/event/event_perf/event_perf.o 00:04:20.728 TEST_HEADER include/spdk/nbd.h 00:04:20.728 TEST_HEADER include/spdk/net.h 00:04:20.728 TEST_HEADER include/spdk/notify.h 00:04:20.728 TEST_HEADER include/spdk/nvme.h 00:04:20.728 TEST_HEADER include/spdk/nvme_intel.h 00:04:20.728 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:20.728 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:20.728 TEST_HEADER include/spdk/nvme_spec.h 00:04:20.728 TEST_HEADER include/spdk/nvme_zns.h 00:04:20.728 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:20.728 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:20.728 TEST_HEADER include/spdk/nvmf.h 00:04:20.728 TEST_HEADER include/spdk/nvmf_spec.h 00:04:20.728 TEST_HEADER include/spdk/nvmf_transport.h 00:04:20.728 TEST_HEADER include/spdk/opal.h 00:04:20.728 TEST_HEADER include/spdk/opal_spec.h 00:04:20.728 TEST_HEADER include/spdk/pci_ids.h 00:04:20.728 TEST_HEADER include/spdk/pipe.h 00:04:20.728 TEST_HEADER include/spdk/queue.h 00:04:20.728 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:20.728 TEST_HEADER include/spdk/reduce.h 00:04:20.728 TEST_HEADER include/spdk/rpc.h 00:04:20.728 TEST_HEADER include/spdk/scheduler.h 00:04:20.728 TEST_HEADER include/spdk/scsi.h 00:04:20.728 TEST_HEADER include/spdk/scsi_spec.h 00:04:20.728 TEST_HEADER include/spdk/sock.h 00:04:20.728 TEST_HEADER include/spdk/stdinc.h 00:04:20.728 TEST_HEADER include/spdk/string.h 00:04:20.728 TEST_HEADER include/spdk/thread.h 00:04:20.728 TEST_HEADER include/spdk/trace.h 00:04:20.728 TEST_HEADER include/spdk/trace_parser.h 00:04:20.728 CC app/spdk_tgt/spdk_tgt.o 00:04:20.728 TEST_HEADER include/spdk/tree.h 00:04:20.728 TEST_HEADER include/spdk/ublk.h 00:04:20.728 TEST_HEADER include/spdk/util.h 00:04:20.728 TEST_HEADER include/spdk/uuid.h 00:04:20.728 TEST_HEADER include/spdk/version.h 00:04:20.728 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:20.728 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:20.728 TEST_HEADER include/spdk/vhost.h 00:04:20.728 TEST_HEADER include/spdk/vmd.h 00:04:20.728 TEST_HEADER include/spdk/xor.h 00:04:20.728 CC test/env/mem_callbacks/mem_callbacks.o 00:04:20.728 TEST_HEADER include/spdk/zipf.h 00:04:20.728 CXX test/cpp_headers/accel.o 00:04:20.986 CC examples/sock/hello_world/hello_sock.o 00:04:20.986 LINK event_perf 00:04:20.986 CC examples/thread/thread/thread_ex.o 00:04:20.986 LINK verify 00:04:20.986 CXX test/cpp_headers/accel_module.o 00:04:20.986 CC examples/vmd/lsvmd/lsvmd.o 00:04:20.986 
LINK spdk_tgt 00:04:21.243 CC examples/idxd/perf/perf.o 00:04:21.243 CC test/event/reactor/reactor.o 00:04:21.243 LINK lsvmd 00:04:21.243 CXX test/cpp_headers/assert.o 00:04:21.243 LINK hello_sock 00:04:21.243 LINK thread 00:04:21.243 CC app/spdk_lspci/spdk_lspci.o 00:04:21.243 LINK nvme_fuzz 00:04:21.243 LINK reactor 00:04:21.500 CXX test/cpp_headers/barrier.o 00:04:21.500 CXX test/cpp_headers/base64.o 00:04:21.500 LINK spdk_lspci 00:04:21.500 CC examples/vmd/led/led.o 00:04:21.500 LINK mem_callbacks 00:04:21.500 CC test/app/histogram_perf/histogram_perf.o 00:04:21.500 CXX test/cpp_headers/bdev.o 00:04:21.500 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:21.500 LINK idxd_perf 00:04:21.500 CXX test/cpp_headers/bdev_module.o 00:04:21.758 CC test/env/vtophys/vtophys.o 00:04:21.758 CC test/event/reactor_perf/reactor_perf.o 00:04:21.758 LINK led 00:04:21.758 LINK histogram_perf 00:04:21.758 CC app/spdk_nvme_perf/perf.o 00:04:21.758 LINK vtophys 00:04:21.758 LINK reactor_perf 00:04:21.758 CC examples/nvme/hello_world/hello_world.o 00:04:21.758 CXX test/cpp_headers/bdev_zone.o 00:04:22.017 CXX test/cpp_headers/bit_array.o 00:04:22.017 CC test/app/jsoncat/jsoncat.o 00:04:22.017 CC examples/nvme/reconnect/reconnect.o 00:04:22.017 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:22.017 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:22.017 CXX test/cpp_headers/bit_pool.o 00:04:22.017 CC test/event/app_repeat/app_repeat.o 00:04:22.275 LINK hello_world 00:04:22.275 LINK jsoncat 00:04:22.275 CC test/event/scheduler/scheduler.o 00:04:22.275 LINK env_dpdk_post_init 00:04:22.275 LINK app_repeat 00:04:22.275 CXX test/cpp_headers/blob_bdev.o 00:04:22.275 CXX test/cpp_headers/blobfs_bdev.o 00:04:22.275 CXX test/cpp_headers/blobfs.o 00:04:22.532 LINK reconnect 00:04:22.532 LINK scheduler 00:04:22.532 CXX test/cpp_headers/blob.o 00:04:22.532 CC test/env/memory/memory_ut.o 00:04:22.532 CC examples/nvme/arbitration/arbitration.o 00:04:22.532 CC examples/nvme/hotplug/hotplug.o 00:04:22.532 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:22.790 LINK nvme_manage 00:04:22.790 CXX test/cpp_headers/conf.o 00:04:22.790 CC examples/accel/perf/accel_perf.o 00:04:22.790 LINK cmb_copy 00:04:22.790 LINK spdk_nvme_perf 00:04:22.790 LINK hotplug 00:04:22.790 CXX test/cpp_headers/config.o 00:04:23.047 CC examples/nvme/abort/abort.o 00:04:23.047 CC examples/blob/hello_world/hello_blob.o 00:04:23.047 CXX test/cpp_headers/cpuset.o 00:04:23.047 LINK arbitration 00:04:23.047 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:23.047 CXX test/cpp_headers/crc16.o 00:04:23.047 CC app/spdk_nvme_discover/discovery_aer.o 00:04:23.305 CC app/spdk_nvme_identify/identify.o 00:04:23.305 CC app/spdk_top/spdk_top.o 00:04:23.305 LINK hello_blob 00:04:23.305 LINK pmr_persistence 00:04:23.305 CXX test/cpp_headers/crc32.o 00:04:23.305 LINK spdk_nvme_discover 00:04:23.305 LINK accel_perf 00:04:23.305 LINK abort 00:04:23.563 CXX test/cpp_headers/crc64.o 00:04:23.563 CC test/app/stub/stub.o 00:04:23.563 CC examples/blob/cli/blobcli.o 00:04:23.821 CC app/vhost/vhost.o 00:04:23.821 CC app/spdk_dd/spdk_dd.o 00:04:23.821 CXX test/cpp_headers/dif.o 00:04:23.821 CC app/fio/nvme/fio_plugin.o 00:04:23.821 LINK stub 00:04:23.821 LINK memory_ut 00:04:23.821 LINK iscsi_fuzz 00:04:23.821 CXX test/cpp_headers/dma.o 00:04:23.821 LINK vhost 00:04:24.079 CXX test/cpp_headers/endian.o 00:04:24.079 LINK spdk_dd 00:04:24.079 CC test/env/pci/pci_ut.o 00:04:24.338 CC examples/bdev/hello_world/hello_bdev.o 00:04:24.338 LINK blobcli 00:04:24.338 LINK 
spdk_nvme_identify 00:04:24.338 CXX test/cpp_headers/env_dpdk.o 00:04:24.338 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:24.338 CC app/fio/bdev/fio_plugin.o 00:04:24.338 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:24.338 LINK spdk_top 00:04:24.338 CXX test/cpp_headers/env.o 00:04:24.597 LINK spdk_nvme 00:04:24.597 LINK hello_bdev 00:04:24.597 CC test/rpc_client/rpc_client_test.o 00:04:24.597 CC examples/bdev/bdevperf/bdevperf.o 00:04:24.597 CXX test/cpp_headers/event.o 00:04:24.597 CXX test/cpp_headers/fd_group.o 00:04:24.597 LINK pci_ut 00:04:24.597 CC test/accel/dif/dif.o 00:04:24.597 CXX test/cpp_headers/fd.o 00:04:24.597 LINK rpc_client_test 00:04:24.854 CXX test/cpp_headers/file.o 00:04:24.854 CC test/blobfs/mkfs/mkfs.o 00:04:24.854 LINK spdk_bdev 00:04:24.854 LINK vhost_fuzz 00:04:24.854 CXX test/cpp_headers/ftl.o 00:04:24.854 CXX test/cpp_headers/gpt_spec.o 00:04:24.854 CXX test/cpp_headers/hexlify.o 00:04:25.111 LINK mkfs 00:04:25.111 CXX test/cpp_headers/histogram_data.o 00:04:25.111 CC test/lvol/esnap/esnap.o 00:04:25.111 CC test/nvme/aer/aer.o 00:04:25.111 CXX test/cpp_headers/idxd.o 00:04:25.111 CC test/nvme/reset/reset.o 00:04:25.111 CXX test/cpp_headers/idxd_spec.o 00:04:25.111 CC test/nvme/sgl/sgl.o 00:04:25.368 CXX test/cpp_headers/init.o 00:04:25.368 LINK dif 00:04:25.368 CC test/nvme/e2edp/nvme_dp.o 00:04:25.369 CC test/nvme/overhead/overhead.o 00:04:25.369 LINK aer 00:04:25.369 CC test/nvme/err_injection/err_injection.o 00:04:25.369 CXX test/cpp_headers/ioat.o 00:04:25.369 LINK reset 00:04:25.369 CXX test/cpp_headers/ioat_spec.o 00:04:25.626 LINK bdevperf 00:04:25.626 LINK sgl 00:04:25.626 LINK err_injection 00:04:25.626 CXX test/cpp_headers/iscsi_spec.o 00:04:25.626 LINK nvme_dp 00:04:25.626 CC test/nvme/startup/startup.o 00:04:25.626 CC test/nvme/reserve/reserve.o 00:04:25.626 CC test/nvme/simple_copy/simple_copy.o 00:04:25.626 LINK overhead 00:04:25.884 CC test/nvme/connect_stress/connect_stress.o 00:04:25.884 CXX test/cpp_headers/json.o 00:04:25.884 LINK startup 00:04:25.884 CC test/nvme/boot_partition/boot_partition.o 00:04:25.884 CC examples/nvmf/nvmf/nvmf.o 00:04:25.884 CC test/nvme/compliance/nvme_compliance.o 00:04:25.884 LINK reserve 00:04:25.884 LINK simple_copy 00:04:26.141 CC test/nvme/fused_ordering/fused_ordering.o 00:04:26.141 CXX test/cpp_headers/jsonrpc.o 00:04:26.141 LINK connect_stress 00:04:26.141 LINK boot_partition 00:04:26.141 CXX test/cpp_headers/keyring.o 00:04:26.141 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:26.141 LINK fused_ordering 00:04:26.141 CC test/nvme/fdp/fdp.o 00:04:26.399 CC test/bdev/bdevio/bdevio.o 00:04:26.399 CXX test/cpp_headers/keyring_module.o 00:04:26.399 LINK nvmf 00:04:26.399 CC test/nvme/cuse/cuse.o 00:04:26.399 LINK nvme_compliance 00:04:26.399 CXX test/cpp_headers/likely.o 00:04:26.399 CXX test/cpp_headers/log.o 00:04:26.399 LINK doorbell_aers 00:04:26.399 CXX test/cpp_headers/lvol.o 00:04:26.399 CXX test/cpp_headers/memory.o 00:04:26.658 CXX test/cpp_headers/mmio.o 00:04:26.658 CXX test/cpp_headers/nbd.o 00:04:26.658 CXX test/cpp_headers/net.o 00:04:26.658 CXX test/cpp_headers/notify.o 00:04:26.658 CXX test/cpp_headers/nvme.o 00:04:26.658 CXX test/cpp_headers/nvme_intel.o 00:04:26.658 LINK fdp 00:04:26.658 CXX test/cpp_headers/nvme_ocssd.o 00:04:26.658 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:26.917 LINK bdevio 00:04:26.917 CXX test/cpp_headers/nvme_spec.o 00:04:26.917 CXX test/cpp_headers/nvme_zns.o 00:04:26.917 CXX test/cpp_headers/nvmf_cmd.o 00:04:26.917 CXX test/cpp_headers/nvmf_fc_spec.o 
00:04:26.917 CXX test/cpp_headers/nvmf.o 00:04:26.917 CXX test/cpp_headers/nvmf_spec.o 00:04:26.917 CXX test/cpp_headers/nvmf_transport.o 00:04:26.917 CXX test/cpp_headers/opal.o 00:04:26.917 CXX test/cpp_headers/opal_spec.o 00:04:26.917 CXX test/cpp_headers/pci_ids.o 00:04:27.176 CXX test/cpp_headers/pipe.o 00:04:27.176 CXX test/cpp_headers/queue.o 00:04:27.176 CXX test/cpp_headers/reduce.o 00:04:27.176 CXX test/cpp_headers/rpc.o 00:04:27.176 CXX test/cpp_headers/scheduler.o 00:04:27.176 CXX test/cpp_headers/scsi.o 00:04:27.176 CXX test/cpp_headers/scsi_spec.o 00:04:27.176 CXX test/cpp_headers/sock.o 00:04:27.176 CXX test/cpp_headers/stdinc.o 00:04:27.176 CXX test/cpp_headers/string.o 00:04:27.176 CXX test/cpp_headers/thread.o 00:04:27.176 CXX test/cpp_headers/trace.o 00:04:27.176 CXX test/cpp_headers/trace_parser.o 00:04:27.434 CXX test/cpp_headers/tree.o 00:04:27.434 CXX test/cpp_headers/ublk.o 00:04:27.434 CXX test/cpp_headers/util.o 00:04:27.434 CXX test/cpp_headers/uuid.o 00:04:27.434 CXX test/cpp_headers/version.o 00:04:27.434 CXX test/cpp_headers/vfio_user_pci.o 00:04:27.434 CXX test/cpp_headers/vfio_user_spec.o 00:04:27.434 CXX test/cpp_headers/vhost.o 00:04:27.434 CXX test/cpp_headers/vmd.o 00:04:27.434 CXX test/cpp_headers/xor.o 00:04:27.693 CXX test/cpp_headers/zipf.o 00:04:27.951 LINK cuse 00:04:32.137 LINK esnap 00:04:32.137 00:04:32.137 real 1m20.274s 00:04:32.137 user 7m34.945s 00:04:32.137 sys 1m42.206s 00:04:32.137 18:11:44 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:32.137 18:11:44 make -- common/autotest_common.sh@10 -- $ set +x 00:04:32.137 ************************************ 00:04:32.137 END TEST make 00:04:32.137 ************************************ 00:04:32.137 18:11:44 -- common/autotest_common.sh@1142 -- $ return 0 00:04:32.137 18:11:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:32.137 18:11:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:32.137 18:11:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:32.137 18:11:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.137 18:11:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:32.137 18:11:44 -- pm/common@44 -- $ pid=5180 00:04:32.137 18:11:44 -- pm/common@50 -- $ kill -TERM 5180 00:04:32.137 18:11:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.137 18:11:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:32.137 18:11:44 -- pm/common@44 -- $ pid=5182 00:04:32.137 18:11:44 -- pm/common@50 -- $ kill -TERM 5182 00:04:32.396 18:11:44 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.396 18:11:44 -- nvmf/common.sh@7 -- # uname -s 00:04:32.396 18:11:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.396 18:11:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.396 18:11:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.396 18:11:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.396 18:11:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.396 18:11:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.396 18:11:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.396 18:11:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.396 18:11:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.396 18:11:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.396 18:11:44 -- nvmf/common.sh@17 
-- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bdffaca2-0b18-4758-9ce7-2e5bdb4d40b8 00:04:32.396 18:11:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=bdffaca2-0b18-4758-9ce7-2e5bdb4d40b8 00:04:32.396 18:11:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.396 18:11:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.396 18:11:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.396 18:11:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.396 18:11:44 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.396 18:11:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.396 18:11:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.396 18:11:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.396 18:11:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.396 18:11:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.396 18:11:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.396 18:11:44 -- paths/export.sh@5 -- # export PATH 00:04:32.396 18:11:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.396 18:11:44 -- nvmf/common.sh@47 -- # : 0 00:04:32.396 18:11:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:32.396 18:11:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:32.396 18:11:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.396 18:11:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.396 18:11:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.396 18:11:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:32.396 18:11:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:32.396 18:11:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:32.396 18:11:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:32.396 18:11:44 -- spdk/autotest.sh@32 -- # uname -s 00:04:32.396 18:11:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:32.396 18:11:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:32.396 18:11:44 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:32.396 18:11:44 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:32.396 18:11:44 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:32.396 18:11:44 -- spdk/autotest.sh@44 -- # modprobe 
nbd 00:04:32.396 18:11:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:32.396 18:11:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:32.396 18:11:44 -- spdk/autotest.sh@48 -- # udevadm_pid=53745 00:04:32.396 18:11:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:32.396 18:11:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:32.396 18:11:44 -- pm/common@17 -- # local monitor 00:04:32.396 18:11:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.396 18:11:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.396 18:11:44 -- pm/common@25 -- # sleep 1 00:04:32.396 18:11:44 -- pm/common@21 -- # date +%s 00:04:32.396 18:11:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721671904 00:04:32.396 18:11:44 -- pm/common@21 -- # date +%s 00:04:32.396 18:11:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721671904 00:04:32.396 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721671904_collect-vmstat.pm.log 00:04:32.396 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721671904_collect-cpu-load.pm.log 00:04:33.330 18:11:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:33.330 18:11:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:33.330 18:11:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.330 18:11:45 -- common/autotest_common.sh@10 -- # set +x 00:04:33.330 18:11:45 -- spdk/autotest.sh@59 -- # create_test_list 00:04:33.330 18:11:45 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:33.330 18:11:45 -- common/autotest_common.sh@10 -- # set +x 00:04:33.330 18:11:45 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:33.330 18:11:45 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:33.330 18:11:45 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:33.330 18:11:45 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:33.330 18:11:45 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:33.330 18:11:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:33.330 18:11:45 -- common/autotest_common.sh@1455 -- # uname 00:04:33.330 18:11:45 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:33.330 18:11:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:33.330 18:11:45 -- common/autotest_common.sh@1475 -- # uname 00:04:33.330 18:11:45 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:33.330 18:11:45 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:33.330 18:11:45 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:33.330 18:11:45 -- spdk/autotest.sh@72 -- # hash lcov 00:04:33.330 18:11:45 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:33.330 18:11:45 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:33.330 --rc lcov_branch_coverage=1 00:04:33.330 --rc lcov_function_coverage=1 00:04:33.330 --rc genhtml_branch_coverage=1 00:04:33.330 --rc genhtml_function_coverage=1 00:04:33.330 --rc genhtml_legend=1 00:04:33.330 --rc geninfo_all_blocks=1 00:04:33.330 ' 00:04:33.330 18:11:45 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:33.330 --rc lcov_branch_coverage=1 00:04:33.330 
--rc lcov_function_coverage=1 00:04:33.330 --rc genhtml_branch_coverage=1 00:04:33.330 --rc genhtml_function_coverage=1 00:04:33.330 --rc genhtml_legend=1 00:04:33.330 --rc geninfo_all_blocks=1 00:04:33.330 ' 00:04:33.330 18:11:45 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:33.330 --rc lcov_branch_coverage=1 00:04:33.330 --rc lcov_function_coverage=1 00:04:33.330 --rc genhtml_branch_coverage=1 00:04:33.330 --rc genhtml_function_coverage=1 00:04:33.330 --rc genhtml_legend=1 00:04:33.330 --rc geninfo_all_blocks=1 00:04:33.330 --no-external' 00:04:33.330 18:11:45 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:33.330 --rc lcov_branch_coverage=1 00:04:33.330 --rc lcov_function_coverage=1 00:04:33.330 --rc genhtml_branch_coverage=1 00:04:33.330 --rc genhtml_function_coverage=1 00:04:33.330 --rc genhtml_legend=1 00:04:33.330 --rc geninfo_all_blocks=1 00:04:33.330 --no-external' 00:04:33.330 18:11:45 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:33.589 lcov: LCOV version 1.14 00:04:33.589 18:11:45 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:51.673 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:51.673 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:03.952 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:03.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:03.952 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:03.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:03.952 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:03.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:03.952 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:03.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:03.952 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:03.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:03.952 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:03.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:03.952 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:03.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:03.952 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:03.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:03.952 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 
00:05:03.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:03.952 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:03.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:03.952 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:03.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:03.952 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:03.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:03.952 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:03.953 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:03.953 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:03.953 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:03.953 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:03.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:06.487 18:12:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:06.487 18:12:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:06.487 18:12:18 -- common/autotest_common.sh@10 -- # set +x 00:05:06.487 18:12:18 -- spdk/autotest.sh@91 -- # rm -f 00:05:06.487 18:12:18 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:07.054 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:05:07.621 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:07.621 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:07.621 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:07.621 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:07.621 18:12:19 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:07.621 18:12:19 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:07.621 18:12:19 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:07.621 18:12:19 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:07.621 18:12:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.621 18:12:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:07.621 18:12:19 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:07.621 18:12:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:07.621 18:12:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.621 18:12:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.621 18:12:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:07.621 18:12:19 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:07.621 18:12:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:07.621 18:12:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.621 18:12:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.621 18:12:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:07.621 18:12:19 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:07.621 18:12:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:07.621 18:12:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.621 18:12:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.621 18:12:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:07.621 18:12:19 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:07.621 18:12:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:07.621 18:12:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.621 18:12:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.621 18:12:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:07.621 18:12:19 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:07.621 18:12:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:07.621 18:12:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.621 18:12:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.621 18:12:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:07.622 18:12:19 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:07.622 18:12:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:07.622 18:12:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.622 18:12:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.622 18:12:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:07.622 18:12:19 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:07.622 18:12:19 -- common/autotest_common.sh@1664 -- # [[ -e 
/sys/block/nvme3n1/queue/zoned ]] 00:05:07.622 18:12:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.622 18:12:19 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:07.622 18:12:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:07.622 18:12:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:07.622 18:12:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:07.622 18:12:19 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:07.622 18:12:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:07.881 No valid GPT data, bailing 00:05:07.881 18:12:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:07.881 18:12:19 -- scripts/common.sh@391 -- # pt= 00:05:07.881 18:12:19 -- scripts/common.sh@392 -- # return 1 00:05:07.881 18:12:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:07.881 1+0 records in 00:05:07.881 1+0 records out 00:05:07.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138997 s, 75.4 MB/s 00:05:07.881 18:12:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:07.881 18:12:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:07.881 18:12:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:07.881 18:12:19 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:07.881 18:12:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:07.881 No valid GPT data, bailing 00:05:07.881 18:12:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:07.881 18:12:19 -- scripts/common.sh@391 -- # pt= 00:05:07.881 18:12:19 -- scripts/common.sh@392 -- # return 1 00:05:07.881 18:12:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:07.881 1+0 records in 00:05:07.881 1+0 records out 00:05:07.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00521975 s, 201 MB/s 00:05:07.881 18:12:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:07.881 18:12:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:07.881 18:12:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:05:07.881 18:12:19 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:05:07.881 18:12:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:07.881 No valid GPT data, bailing 00:05:07.881 18:12:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:07.881 18:12:19 -- scripts/common.sh@391 -- # pt= 00:05:07.881 18:12:19 -- scripts/common.sh@392 -- # return 1 00:05:07.881 18:12:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:07.881 1+0 records in 00:05:07.881 1+0 records out 00:05:07.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414473 s, 253 MB/s 00:05:07.881 18:12:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:07.881 18:12:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:07.881 18:12:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:05:07.881 18:12:19 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:05:07.881 18:12:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:08.139 No valid GPT data, bailing 00:05:08.139 18:12:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:08.139 18:12:19 -- scripts/common.sh@391 -- # pt= 00:05:08.139 18:12:19 -- scripts/common.sh@392 -- # return 1 00:05:08.139 18:12:19 -- 
spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:08.139 1+0 records in 00:05:08.139 1+0 records out 00:05:08.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408113 s, 257 MB/s 00:05:08.139 18:12:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.139 18:12:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:08.139 18:12:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:05:08.139 18:12:19 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:05:08.139 18:12:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:08.139 No valid GPT data, bailing 00:05:08.139 18:12:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:08.139 18:12:20 -- scripts/common.sh@391 -- # pt= 00:05:08.139 18:12:20 -- scripts/common.sh@392 -- # return 1 00:05:08.139 18:12:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:08.139 1+0 records in 00:05:08.139 1+0 records out 00:05:08.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439044 s, 239 MB/s 00:05:08.139 18:12:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.139 18:12:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:08.139 18:12:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:05:08.139 18:12:20 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:05:08.139 18:12:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:08.139 No valid GPT data, bailing 00:05:08.139 18:12:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:08.139 18:12:20 -- scripts/common.sh@391 -- # pt= 00:05:08.139 18:12:20 -- scripts/common.sh@392 -- # return 1 00:05:08.139 18:12:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:08.139 1+0 records in 00:05:08.139 1+0 records out 00:05:08.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00343645 s, 305 MB/s 00:05:08.139 18:12:20 -- spdk/autotest.sh@118 -- # sync 00:05:08.403 18:12:20 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:08.403 18:12:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:08.403 18:12:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:10.314 18:12:21 -- spdk/autotest.sh@124 -- # uname -s 00:05:10.314 18:12:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:10.314 18:12:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:10.314 18:12:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.314 18:12:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.314 18:12:22 -- common/autotest_common.sh@10 -- # set +x 00:05:10.314 ************************************ 00:05:10.314 START TEST setup.sh 00:05:10.314 ************************************ 00:05:10.314 18:12:22 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:10.314 * Looking for test storage... 
00:05:10.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:10.314 18:12:22 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:10.314 18:12:22 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:10.314 18:12:22 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:10.314 18:12:22 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.314 18:12:22 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.314 18:12:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:10.314 ************************************ 00:05:10.314 START TEST acl 00:05:10.314 ************************************ 00:05:10.314 18:12:22 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:10.314 * Looking for test storage... 00:05:10.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:10.314 18:12:22 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:10.314 18:12:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:10.314 18:12:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:10.314 18:12:22 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:10.314 18:12:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:10.315 18:12:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:10.315 18:12:22 
00:05:10.315 18:12:22 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:05:10.315 18:12:22 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:05:10.315 18:12:22 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:05:10.315 18:12:22 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:05:10.315 18:12:22 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:05:10.315 18:12:22 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:10.315 18:12:22 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:11.692 18:12:23 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:05:11.692 18:12:23 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:05:11.692 18:12:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:11.692 18:12:23 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:05:11.692 18:12:23 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:05:11.692 18:12:23 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:11.951 18:12:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]]
00:05:11.951 18:12:23 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:11.951 18:12:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:12.519 Hugepages
00:05:12.519 node hugesize free / total
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:12.519
00:05:12.519 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]]
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]]
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]]
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]]
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:05:12.519 18:12:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[... identical accept blocks follow for 0000:00:11.0 and 0000:00:12.0; each controller is appended to devs with drivers["$dev"]=nvme ...]
00:05:12.778 18:12:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]]
00:05:12.778 18:12:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:05:12.778 18:12:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]]
00:05:12.778 18:12:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:05:12.778 18:12:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:05:12.778 18:12:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:12.778 18:12:24 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 ))
00:05:12.778 18:12:24 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:05:12.778 18:12:24 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:12.778 18:12:24 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:12.778 18:12:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:05:12.778 ************************************
00:05:12.778 START TEST denied
************************************
00:05:12.778 18:12:24 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:05:12.778 18:12:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0'
00:05:12.778 18:12:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:05:12.778 18:12:24 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0'
00:05:12.778 18:12:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:05:12.778 18:12:24 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:14.156 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0
00:05:14.156 18:12:25 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0
00:05:14.156 18:12:25 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:05:14.156 18:12:25 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:05:14.156 18:12:25 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]]
00:05:14.156 18:12:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver
00:05:14.156 18:12:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:05:14.156 18:12:25 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:05:14.156 18:12:25 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:05:14.156 18:12:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:14.156 18:12:25 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:20.718
00:05:20.718 real 0m7.171s
00:05:20.718 user 0m0.817s
00:05:20.718 sys 0m1.388s
00:05:20.718 18:12:31 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:20.718 18:12:31 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:05:20.718 ************************************
00:05:20.718 END TEST denied
00:05:20.718 ************************************
00:05:20.718 18:12:31 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
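Both ACL cases drive the same script through one environment knob, and verify() decides ownership by resolving the device's driver symlink. Roughly, under the assumption (consistent with the trace) that setup.sh honors PCI_BLOCKED and PCI_ALLOWED as space-separated BDF lists:

  # denied: a blocked controller must be skipped and stay on the kernel driver.
  PCI_BLOCKED=' 0000:00:10.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config |
      grep 'Skipping denied controller at 0000:00:10.0'

  # allowed: only the listed BDF may be rebound to a userspace driver.
  PCI_ALLOWED=0000:00:10.0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh config |
      grep -E '0000:00:10.0 .*: nvme -> .*'

  # verify(): the basename of the resolved driver symlink names the owner.
  driver=$(readlink -f /sys/bus/pci/devices/0000:00:11.0/driver)
  [[ ${driver##*/} == nvme ]]   # untouched controllers remain on nvme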
"$@" 00:05:20.977 18:12:32 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:05:20.977 18:12:32 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:05:20.977 18:12:32 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:20.977 18:12:32 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:20.977 18:12:32 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:20.977 18:12:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:20.977 18:12:32 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:22.352 00:05:22.352 real 0m2.269s 00:05:22.352 user 0m0.998s 00:05:22.352 sys 0m1.260s 00:05:22.352 18:12:34 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.352 18:12:34 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:22.352 ************************************ 00:05:22.352 END TEST allowed 00:05:22.352 ************************************ 00:05:22.352 18:12:34 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:22.352 ************************************ 00:05:22.352 END TEST acl 00:05:22.352 ************************************ 00:05:22.352 00:05:22.352 real 0m12.054s 00:05:22.352 user 0m3.065s 00:05:22.352 sys 0m4.022s 00:05:22.352 18:12:34 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.352 18:12:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:22.352 18:12:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:22.352 18:12:34 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:22.352 18:12:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.352 18:12:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.352 18:12:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:22.352 ************************************ 00:05:22.352 START TEST hugepages 00:05:22.352 ************************************ 00:05:22.352 18:12:34 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:22.352 * Looking for test storage... 
00:05:22.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5823676 kB' 'MemAvailable: 7402892 kB' 'Buffers: 2436 kB' 'Cached: 1792528 kB' 'SwapCached: 0 kB' 'Active: 444704 kB' 'Inactive: 1452452 kB' 'Active(anon): 112704 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 104396 kB' 'Mapped: 48792 kB' 'Shmem: 10512 kB' 'KReclaimable: 63408 kB' 'Slab: 136144 kB' 'SReclaimable: 63408 kB' 'SUnreclaim: 72736 kB' 'KernelStack: 6344 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 327028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:22.352 18:12:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue cycle repeats for every following meminfo key, from MemFree through HugePages_Surp; the identical xtrace lines are elided ...]
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
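Stripped of the xtrace noise, the scan above is get_meminfo: load /proc/meminfo (or a per-node copy under /sys/devices/system/node), drop any "Node N" prefix, and print the value of the first matching key. A compact rendition of the traced logic; here-string iteration stands in for the script's process substitution:

  shopt -s extglob
  get_meminfo() {                      # e.g. get_meminfo Hugepagesize -> 2048
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      local mem line var val _
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }") # per-node files prefix lines with "Node N"
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

On this VM the call returns 2048 (kB), which is what seeds default_hugepages below.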
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:22.354 18:12:34 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:22.354 18:12:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:22.354 18:12:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:22.354 18:12:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:22.354 ************************************
00:05:22.354 START TEST default_setup
************************************
00:05:22.354 18:12:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:05:22.613 18:12:34 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:23.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:23.750 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:23.750 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:23.750 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:05:23.750 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
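The sizing above is plain division: default_setup requests 2097152 kB (2 GiB) of the default 2048 kB pages on node 0, so nodes_test[0] becomes 1024 pages. A worked sketch of that arithmetic written through sysfs, using the same per-node paths the clear_hp trace walked (run as root):

  size_kb=2097152               # requested pool, in kB (2 GiB)
  page_kb=2048                  # Hugepagesize reported by get_meminfo
  nr=$(( size_kb / page_kb ))   # = 1024 pages

  hp=/sys/devices/system/node/node0/hugepages/hugepages-${page_kb}kB/nr_hugepages
  echo 0     > "$hp"            # clear_hp: drop any leftover reservation first
  echo "$nr" > "$hp"            # then reserve the pool that verify_nr_hugepages checks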
18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7962880 kB' 'MemAvailable: 9541920 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 462648 kB' 'Inactive: 1452476 kB' 'Active(anon): 130648 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121840 kB' 'Mapped: 48756 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135184 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72180 kB' 'KernelStack: 6416 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.750 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- 
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # (trace condensed: read -r var val _ / continue for each remaining field: Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted; none match AnonHugePages)
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7962384 kB' 'MemAvailable: 9541424 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 461952 kB' 'Inactive: 1452476 kB' 'Active(anon): 129952 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121024 kB' 'Mapped: 48796 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135176 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72172 kB' 'KernelStack: 6336 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
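The long runs of "# continue" above are the per-field scan inside get_meminfo (setup/common.sh): the function reads the whole meminfo file and skips lines until the requested key matches. A minimal sketch of that pattern, reconstructed only from the xtrace lines visible here (an approximation of the scripted logic, not the verbatim SPDK source):

shopt -s extglob                      # the +([0-9]) pattern below needs extglob

get_meminfo() {                       # sketch: get_meminfo <field> [<numa-node>]
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem
    # Prefer the per-node meminfo when a node is given and sysfs exposes it;
    # with node empty the path below never exists, so /proc/meminfo is kept.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix of per-node files
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the runs of "continue" in the trace
        echo "$val"                        # value in kB, or a bare page count
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

# usage matching the trace: anon=$(get_meminfo AnonHugePages)  ->  0, per hugepages.sh@97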
00:05:23.751 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # (trace condensed: read -r var val _ / continue for every field from MemTotal through HugePages_Rsvd; none match HugePages_Surp)
00:05:23.753 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:23.753 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:23.753 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:23.753 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:05:23.753 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:23.753 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@17-@31 -- # (trace condensed: local get=HugePages_Rsvd, node=, mem_f=/proc/meminfo; no per-node meminfo; mapfile -t mem; strip 'Node N ' prefixes; IFS=': ')
00:05:23.753 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7962132 kB' 'MemAvailable: 9541180 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 461952 kB' 'Inactive: 1452484 kB' 'Active(anon): 129952 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121316 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135176 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72172 kB' 'KernelStack: 6352 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
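One detail worth pulling out of the repeated @23 checks: node is empty for every call in this run, so the tested sysfs path degenerates to /sys/devices/system/node/node/meminfo, which cannot exist, and each lookup falls through to the system-wide /proc/meminfo. A two-line illustration of that fallback (hypothetical shell session, not taken from this log):

node=
[[ -e /sys/devices/system/node/node$node/meminfo ]] || echo "falling back to /proc/meminfo"
# prints: falling back to /proc/meminfo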
00:05:23.754 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # (trace condensed: read -r var val _ / continue for every field from MemTotal through HugePages_Free; none match HugePages_Rsvd)
00:05:24.017 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.017 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:24.017 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:24.017 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:24.017 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:24.017 nr_hugepages=1024
00:05:24.017 resv_hugepages=0
00:05:24.017 surplus_hugepages=0
00:05:24.017 anon_hugepages=0
00:05:24.017 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:24.017 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:24.017 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:24.017 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:24.017 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
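The @102-@109 lines are the verification step of default_setup: the test echoes its bookkeeping and asserts it is self-consistent before asking the kernel for its own count. A sketch of that logic (reconstructed from the trace; the literal 1024 in the (( )) lines is an already-expanded value, so NRHUGE below is a hypothetical stand-in for whichever variable it came from):

NRHUGE=1024                                  # hypothetical name for the requested page count
nr_hugepages=1024 surp=0 resv=0 anon=0       # values recorded by the trace above
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
(( NRHUGE == nr_hugepages + surp + resv ))   # @107: 1024 == 1024 + 0 + 0, holds
(( NRHUGE == nr_hugepages ))                 # @109: 1024 == 1024, holds
total=$(get_meminfo HugePages_Total)         # @110 below: the kernel's own count, checked next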
00:05:24.017 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:24.017 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@17-@31 -- # (trace condensed: local get=HugePages_Total, node=, mem_f=/proc/meminfo; no per-node meminfo; mapfile -t mem; strip 'Node N ' prefixes; IFS=': ')
00:05:24.017 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7961880 kB' 'MemAvailable: 9540928 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 461960 kB' 'Inactive: 1452484 kB' 'Active(anon): 129960 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121276 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135176 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72172 kB' 'KernelStack: 6336 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # (trace condensed: read -r var val _ / continue; scan toward HugePages_Total in progress, MemTotal through VmallocUsed skipped so far)
setup/common.sh@32 -- # continue 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
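[editor's note] The loop being traced here is a tiny meminfo parser. A minimal sketch of the pattern (a readability re-creation, not the verbatim setup/common.sh helper) looks like this:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern the xtrace above walks through:
    # look up one key in /proc/meminfo, or in a node's own meminfo file
    # when a node id is supplied.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem

        # Per-node values live under sysfs and carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Print the value as soon as the requested key matches.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Total    # prints 1024 on this box
    get_meminfo HugePages_Surp 0   # same lookup against node0's meminfo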
00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7961880 kB' 'MemUsed: 4280100 kB' 'SwapCached: 0 kB' 'Active: 462172 kB' 'Inactive: 1452484 kB' 'Active(anon): 130172 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'FilePages: 1794956 kB' 'Mapped: 48736 kB' 'AnonPages: 121236 kB' 'Shmem: 10472 kB' 'KernelStack: 6320 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63004 kB' 'Slab: 135176 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:24.019 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 skips every node0 meminfo key via `continue` until it reaches HugePages_Surp]
00:05:24.020 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.020 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:24.020 18:12:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:24.020 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:24.020 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:24.020 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:24.020 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:24.020 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:05:24.020 18:12:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:24.020 real 0m1.480s
00:05:24.020 user 0m0.648s
00:05:24.020 sys 0m0.752s
00:05:24.020 ************************************
00:05:24.020 END TEST default_setup
00:05:24.020 ************************************
00:05:24.020 18:12:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:24.020 18:12:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:24.020 18:12:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
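[editor's note] For readers following the arithmetic in the default_setup verification above, it boils down to roughly the following (a loose reconstruction, not the verbatim setup/hugepages.sh; get_meminfo is the sketch shown earlier, and 1024/0 are this run's values):

    #!/usr/bin/env bash
    nr_hugepages=1024 surp=0 resv=0
    nodes_test=([0]=1024)   # pages each NUMA node is expected to hold

    # setup/hugepages.sh@110: global total == requested + surplus + reserved
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

    for node in "${!nodes_test[@]}"; do
        # @116-117: fold per-node reserved and surplus pages into the expectation
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        # @128 prints the comparison; this run shows "node0=1024 expecting 1024"
        echo "node$node=${nodes_test[node]} expecting $nr_hugepages"
    done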
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:24.020 18:12:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.020 18:12:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.020 18:12:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:24.020 ************************************ 00:05:24.020 START TEST per_node_1G_alloc 00:05:24.020 ************************************ 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.020 18:12:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.542 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.542 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:05:24.542 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.542 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.542 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9009600 kB' 'MemAvailable: 10588648 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 462232 kB' 'Inactive: 1452484 kB' 'Active(anon): 130232 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121492 kB' 'Mapped: 48824 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135212 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72208 kB' 'KernelStack: 6372 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: setup/common.sh@31-32 skips every /proc/meminfo key, MemTotal through HardwareCorrupted, via `continue` until it reaches AnonHugePages]
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
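[editor's note] The `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` test earlier and the AnonHugePages read that just returned 0 fit together roughly as below (a reconstruction; the intent, presumably, is to sample transparent-hugepage usage only when THP is not hard-disabled):

    #!/usr/bin/env bash
    # Reconstruction of the anon sampling step (get_meminfo as sketched earlier).
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP is not disabled, so anonymous huge page usage may be non-zero.
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    fi
    echo "anon=$anon"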
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9009100 kB' 'MemAvailable: 10588148 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 461992 kB' 'Inactive: 1452484 kB' 'Active(anon): 129992 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121312 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135212 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72208 kB' 'KernelStack: 6372 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
00:05:24.544 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the setup/common.sh@31-32 loop again skips each /proc/meminfo key via `continue` while scanning for HugePages_Surp]
setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.545 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- 
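The scan traced above is one pattern repeated per /proc/meminfo key. A minimal sketch of the get_meminfo helper those common.sh@17-@33 trace lines correspond to, reconstructed from the trace itself (names and line structure follow the trace; the exact SPDK source may differ, so treat this as an approximation, not the verbatim implementation):

#!/usr/bin/env bash
shopt -s extglob # the +([0-9]) pattern below needs extended globbing

get_meminfo() {
    local get=$1      # key to look up, e.g. HugePages_Surp
    local node=${2:-} # optional NUMA node; empty means system-wide
    local var val _
    local mem_f mem

    mem_f=/proc/meminfo
    # With a node argument the node-local file is preferred; the trace tests
    # /sys/devices/system/node/node/meminfo because $node is empty here.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        # "MemTotal: 12241980 kB" splits into var=MemTotal, val=12241980
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
}

With that in hand, the hugepages.sh@99 line above is just surp=$(get_meminfo HugePages_Surp), which prints and captures 0 on this machine.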
00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... xtrace elided: get_meminfo prologue identical to the HugePages_Surp call above (common.sh@17-31), with get=HugePages_Rsvd ...]
00:05:24.546 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9009100 kB' 'MemAvailable: 10588148 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 462228 kB' 'Inactive: 1452484 kB' 'Active(anon): 130228 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121232 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135208 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72204 kB' 'KernelStack: 6356 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: every field from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits continue ...]
00:05:24.548 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.548 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:24.548 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:24.548 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
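Stripped of the per-key traces, the bookkeeping in hugepages.sh@97-@109 collapses to a few lines. A hedged reconstruction using the get_meminfo sketch above (variable names are from the trace; the surrounding control flow, including where nr_hugepages is fetched, is an assumption):

# Assumed glue around the traced hugepages.sh@97-@109 lines:
anon=$(get_meminfo AnonHugePages)           # 0 in this run
surp=$(get_meminfo HugePages_Surp)          # 0
resv=$(get_meminfo HugePages_Rsvd)          # 0
nr_hugepages=$(get_meminfo HugePages_Total) # 512; assumed to come from an earlier lookup

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The stage only passes if the 512 requested pages are all genuinely there,
# with no surplus or reserved pages inflating the total. Under set -e a
# false arithmetic test aborts the run.
(( 512 == nr_hugepages + surp + resv ))
(( 512 == nr_hugepages ))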
00:05:24.548 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
nr_hugepages=512
00:05:24.548 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:24.548 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:24.548 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:24.548 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:24.548 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:24.809 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... xtrace elided: get_meminfo prologue identical to the calls above (common.sh@17-31), with get=HugePages_Total ...]
00:05:24.810 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9008848 kB' 'MemAvailable: 10587896 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 462284 kB' 'Inactive: 1452484 kB' 'Active(anon): 130284 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121340 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135208 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72204 kB' 'KernelStack: 6372 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
[... xtrace continues: per-key scan against HugePages_Total, truncated here ...]
IFS=': ' 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.811 18:12:36 
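The get_meminfo trace above (setup/common.sh) walks a meminfo file key by key, skipping every field until the requested one, then echoes its value; here HugePages_Total resolved to 512. A minimal, self-contained re-sketch of that loop, assuming the same 'Key: value' meminfo layout; get_meminfo_sketch and its internals are illustrative names, not the literal SPDK helper.

#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern below needs extended globs

# Sketch: fetch one meminfo field, optionally from a NUMA node's local meminfo,
# mirroring the mapfile / prefix-strip / IFS=': ' read pattern in the trace.
get_meminfo_sketch() {
	local get=$1 node=${2:-} line var val _
	local mem_f=/proc/meminfo mem
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem <"$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")   # node-local files prefix each line with "Node N "
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"
		[[ $var == "$get" ]] || continue   # skip non-matching keys, as traced above
		echo "$val"
		return 0
	done
	return 1
}

# Example mirroring this run: get_meminfo_sketch HugePages_Total 0  ->  512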
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.811 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:24.812 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9008876 kB' 'MemUsed: 3233104 kB' 'SwapCached: 0 kB' 'Active: 462076 kB' 'Inactive: 1452484 kB' 'Active(anon): 130076 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1794956 kB' 'Mapped: 48740 kB' 'AnonPages: 121228 kB' 'Shmem: 10472 kB' 'KernelStack: 6288 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63004 kB' 'Slab: 135196 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:24.812 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- [xtrace elided: setup/common.sh@31-32 continues past every node0 meminfo field, MemTotal through HugePages_Free, that is not HugePages_Surp]
00:05:24.813 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.813 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:24.813 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:24.813 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:24.813 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:24.813 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:24.813 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:24.813 node0=512 expecting 512
00:05:24.813 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:24.813 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:24.813 real 0m0.713s
00:05:24.813 user 0m0.323s
00:05:24.813 sys 0m0.434s
00:05:24.813 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:24.813 18:12:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:24.813 ************************************
00:05:24.813 END TEST per_node_1G_alloc
00:05:24.813 ************************************
00:05:24.813 18:12:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:24.813 18:12:36 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:24.813 18:12:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:24.813 18:12:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:24.813 18:12:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:24.813 ************************************
00:05:24.813 START TEST even_2G_alloc
00:05:24.813 ************************************
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
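The per_node_1G_alloc epilogue above folds the node's reserved and surplus hugepages into the expected count and asserts the node reports it ('node0=512 expecting 512'). A compact sketch of that assertion, assuming the get_meminfo_sketch helper above; verify_node_sketch and its surplus handling are illustrative simplifications, not the literal setup/hugepages.sh logic.

# Check that one NUMA node reports the hugepage count the test allocated.
verify_node_sketch() {
	local node=$1 expected=$2 surp got
	surp=$(get_meminfo_sketch HugePages_Surp "$node") || surp=0
	got=$(get_meminfo_sketch HugePages_Total "$node") || return 1
	echo "node$node=$got expecting $((expected + surp))"
	[[ $got -eq $((expected + surp)) ]]
}

# In this run: verify_node_sketch 0 512  ->  "node0=512 expecting 512", success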
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:24.813 18:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:25.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:25.334 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.334 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.334 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.334 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
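A few entries up, the even_2G_alloc test hands NRHUGE=1024 and HUGE_EVEN_ALLOC=yes to scripts/setup.sh, which distributes the requested hugepages evenly across NUMA nodes rather than concentrating them on one. As a usage note, the traced step amounts to running (with the repo path from this run):

NRHUGE=1024 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh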
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7959960 kB' 'MemAvailable: 9539008 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 462128 kB' 'Inactive: 1452484 kB' 'Active(anon): 130128 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121732 kB' 'Mapped: 49044 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135200 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72196 kB' 'KernelStack: 6300 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
00:05:25.334 18:12:37 setup.sh.hugepages.even_2G_alloc -- [xtrace elided: setup/common.sh@31-32 continues past every meminfo field, MemTotal through HardwareCorrupted, that is not AnonHugePages]
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:25.335 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7959960 kB' 'MemAvailable: 9539008 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 461924 kB' 'Inactive: 1452484 kB' 'Active(anon): 129924 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 121284 kB' 'Mapped: 48840 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135184 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72180 kB' 'KernelStack: 6288 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- [xtrace elided: setup/common.sh@31-32 continues past meminfo fields MemTotal through Zswap that are not HugePages_Surp]
IFS=': ' 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.336 18:12:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.336 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7960220 kB' 'MemAvailable: 9539268 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 461948 kB' 'Inactive: 1452484 kB' 'Active(anon): 129948 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 121340 kB' 'Mapped: 48740 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135180 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72176 kB' 'KernelStack: 6320 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.338 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.338 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.338 18:12:37 
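[Note: the trace above is the generic meminfo lookup in setup/common.sh. A minimal bash sketch of the same pattern follows — an illustrative re-creation inferred from the trace, not the verbatim SPDK helper: choose /proc/meminfo or a per-node sysfs meminfo, strip any "Node N" prefix, split each line on ': ', and print the value of the requested field. With an empty $node the sysfs path below is /sys/devices/system/node/node/meminfo, which does not exist, so the system-wide file is used — exactly the fallback seen at common.sh@23 above.]

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() { # usage: get_meminfo <field> [node]
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node stats live under sysfs; fall back to /proc/meminfo
        # when no node is given or the node directory is absent.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # the repeated test in the trace
            echo "${val:-0}"
            return 0
        done
        echo 0
    }

[With this run's snapshot, get_meminfo HugePages_Surp prints 0 and get_meminfo HugePages_Total prints 1024, matching the values echoed at common.sh@33 in the trace.]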
00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:25.337 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7960220 kB' 'MemAvailable: 9539268 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 461948 kB' 'Inactive: 1452484 kB' 'Active(anon): 129948 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 121340 kB' 'Mapped: 48740 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135180 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72176 kB' 'KernelStack: 6320 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
[00:05:25.337-00:05:25.339 setup/common.sh@31-32, condensed: every snapshot field from MemTotal through HugePages_Free is tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped with continue]
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:25.339 nr_hugepages=1024
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:25.339 resv_hugepages=0
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:25.339 surplus_hugepages=0
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:25.339 anon_hugepages=0
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
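[Note: the two arithmetic tests at hugepages.sh@107 and @109 assert the even-allocation invariant — the kernel must report exactly the requested page count once surplus and reserved pages are accounted for. As a cross-check on this run's numbers: 1024 pages x 2048 kB Hugepagesize = 2097152 kB, matching the Hugetlb field in the snapshots. A hedged re-statement of the assertion, assuming the get_meminfo sketch above; variable names mirror the trace but this is not the SPDK script itself:]

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run

    # Allocated pages must equal requested pages plus surplus/reserved.
    (( total == nr_hugepages + surp + resv )) || {
        echo "hugepage accounting mismatch: total=$total," \
             "expected=$((nr_hugepages + surp + resv))" >&2
        exit 1
    }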
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:25.339 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:25.340 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7960220 kB' 'MemAvailable: 9539268 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 461812 kB' 'Inactive: 1452484 kB' 'Active(anon): 129812 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 120876 kB' 'Mapped: 48740 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135180 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72176 kB' 'KernelStack: 6256 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
[00:05:25.340-00:05:25.341 setup/common.sh@31-32, condensed: every snapshot field from MemTotal through Unaccepted is tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped with continue]
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7960220 kB' 'MemUsed: 4281760 kB' 'SwapCached: 0 kB' 'Active: 461812 kB' 'Inactive: 1452484 kB' 'Active(anon): 129812 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'FilePages: 1794956 kB' 'Mapped: 48740 kB' 'AnonPages: 121136 kB' 'Shmem: 10472 kB' 'KernelStack: 6324 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63004 kB' 'Slab: 135180 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 18:12:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.343 node0=1024 expecting 1024 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:25.343 00:05:25.343 real 0m0.625s 00:05:25.343 user 0m0.280s 00:05:25.343 sys 0m0.381s 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.343 18:12:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:25.343 ************************************ 00:05:25.343 END TEST even_2G_alloc 00:05:25.343 ************************************ 00:05:25.343 18:12:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:25.343 18:12:37 setup.sh.hugepages -- 
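[editorial note] The long runs of continue entries above all come from one helper: get_meminfo in setup/common.sh scans the (optionally per-node) meminfo file line by line until it hits the requested key. A minimal sketch of that idiom, reconstructed from the xtrace output rather than copied from the script, so details may differ:

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f=/proc/meminfo mem
      # With a node argument, read the node-local file instead; its lines
      # carry a "Node 0 " prefix that must be stripped before parsing.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")
      # Split "Key:   value kB" on ': ' and return at the first match; this
      # linear scan is what generates the long continue runs in the trace.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Total      # printed 1024 on the machine traced above
  get_meminfo HugePages_Surp 0     # per-node lookup; printed 0 above

The trace's escaped pattern ([[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]) is just how xtrace renders a literal, non-glob comparison; the quoted "$get" above has the same effect.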
00:05:25.343 18:12:37 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:25.343 18:12:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:25.343 18:12:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:25.343 18:12:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:25.343 ************************************
00:05:25.343 START TEST odd_alloc
00:05:25.343 ************************************
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:25.343 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:25.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:25.867 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.867 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.867 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.867 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
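[editorial note] Before following the verification trace below, note that the requested size is deliberately odd: HUGEMEM=2049 MB at the default 2 MB hugepage size is 1024.5 pages, which gets rounded up to the 1025 seen in nr_hugepages above. A hedged sketch of that arithmetic (the real get_test_nr_hugepages may compute it differently):

  size_kb=2098176          # HUGEMEM=2049, i.e. 2049 MB expressed in kB
  hugepage_kb=2048         # default 2 MB hugepage size, in kB
  # Ceiling division: 2098176 / 2048 = 1024.5, rounded up to an odd 1025.
  nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
  echo "$nr_hugepages"     # 1025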
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:25.867 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7958332 kB' 'MemAvailable: 9537380 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 462780 kB' 'Inactive: 1452484 kB' 'Active(anon): 130780 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121952 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135168 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72164 kB' 'KernelStack: 6388 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
00:05:25.868 [xtrace collapsed: setup/common.sh@31-32 scan of the dump for AnonHugePages; keys MemTotal .. HardwareCorrupted skipped via continue]
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' [second full /proc/meminfo dump; identical to the one above except MemFree: 7958584 kB, MemAvailable: 9537632 kB, Active: 461988 kB, Active(anon): 129988 kB, AnonPages: 121376 kB, Mapped: 48728 kB, Slab: 135164 kB, SUnreclaim: 72160 kB, KernelStack: 6368 kB, PageTables: 4220 kB]
00:05:25.868 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:25.869 [xtrace collapsed: setup/common.sh@31-32 scan of the dump for HugePages_Surp under way; keys MemTotal .. ShmemPmdMapped compared when this excerpt cuts off mid-loop]
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:25.870 18:12:37 
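The block above is get_meminfo from setup/common.sh resolving HugePages_Surp under xtrace: read the meminfo file into an array, strip any "Node <N> " prefix, then split each line on ': ' and compare the key against the requested one, echoing the value on a match. A minimal re-creation of that pattern, assuming bash 4+ for mapfile; get_meminfo_sketch is a hypothetical name for illustration, not the verbatim SPDK helper:

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern traced above; hedged, not the
# real setup/common.sh function.
shopt -s extglob   # required for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val _ line
    # An optional node index switches the source to that node's meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <N> "; strip the
    # prefix so both file layouts parse the same way.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "HugePages_Surp:        0" into key and value on ': '.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Surp     # -> 0 in the run above
get_meminfo_sketch HugePages_Total 0  # per-node form, node0

The per-key compare-and-continue cadence filling this trace is exactly that loop running with xtrace enabled.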
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:25.870 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7958584 kB' 'MemAvailable: 9537632 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 461904 kB' 'Inactive: 1452484 kB' 'Active(anon): 129904 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121268 kB' 'Mapped: 48744 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135152 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72148 kB' 'KernelStack: 6304 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: the same per-key scan now runs with get=HugePages_Rsvd; every key from MemTotal through HugePages_Free falls through to continue]
00:05:25.872 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:25.872 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:25.872 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:25.872 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:25.872 nr_hugepages=1025
00:05:25.872 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:25.872 resv_hugepages=0
00:05:25.872 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:25.872 surplus_hugepages=0
00:05:25.872 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:25.872 anon_hugepages=0
00:05:25.872 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:25.872 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:25.872 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:25.872 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[trace condensed: get_meminfo preamble identical to the call above (setup/common.sh@17-31), this time with get=HugePages_Total]
00:05:25.872 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7958584 kB' 'MemAvailable: 9537632 kB' 'Buffers: 2436 kB' 'Cached: 1792520 kB' 'SwapCached: 0 kB' 'Active: 462168 kB' 'Inactive: 1452484 kB' 'Active(anon): 130168 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121316 kB' 'Mapped: 48744 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135148 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72144 kB' 'KernelStack: 6320 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
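With surp and resv both resolved to 0 and 1025 pages requested by the odd_alloc case, the hugepages.sh@107-110 checks above and just below reduce to an accounting identity: the kernel's HugePages_Total must equal the requested count plus surplus plus reserved pages. A hedged sketch of that verification, reusing the hypothetical get_meminfo_sketch helper from the earlier sketch (the numeric values are the ones visible in this run):

# Accounting identity checked at setup/hugepages.sh@107-110 in the
# trace; 1025 is the odd page count this odd_alloc case requested.
nr_hugepages=1025
surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)  # 1025 in this run

if (( total != nr_hugepages + surp + resv )); then
    echo "hugepage accounting mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    exit 1
fi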
[trace condensed: per-key scan with get=HugePages_Total; every key from MemTotal through Unaccepted falls through to continue]
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
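get_nodes (hugepages.sh@27-32 above) walks the sysfs node directories with an extglob pattern and keys an array by node index; this VM exposes a single NUMA node, hence no_nodes=1 and the check just below. A sketch of that enumeration; the trace only shows the resulting assignment nodes_sys[0]=1025, not how the real script sources each node's count, so the right-hand side here reuses the earlier hypothetical helper as an assumption:

# Node enumeration mirroring the get_nodes trace above; assumes the
# sysfs node directories exist and extglob is available.
shopt -s extglob
declare -A nodes_sys

for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} reduces ".../node0" to the bare index "0".
    nodes_sys[${node##*node}]=$(get_meminfo_sketch HugePages_Total "${node##*node}")
done

no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }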
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:25.874 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7958584 kB' 'MemUsed: 4283396 kB' 'SwapCached: 0 kB' 'Active: 461948 kB' 'Inactive: 1452484 kB' 'Active(anon): 129948 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1794956 kB' 'Mapped: 48744 kB' 'AnonPages: 121052 kB' 'Shmem: 10472 kB' 'KernelStack: 6320 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63004 kB' 'Slab: 135148 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[... xtrace condensed: the read loop tests every node0 field from MemTotal through HugePages_Free against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skips each with "continue" ...]
00:05:26.135 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:26.135 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:26.135 18:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:26.135 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:26.135 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:26.135 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:26.135 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=1025 expecting 1025
00:05:26.135 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:05:26.135 18:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:05:26.135
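For reference, the bookkeeping the odd_alloc verification just performed, with this run's values plugged in: 1025 pages allocated, zero surplus and zero reserved, so the @110 total check passes and each node's expectation stays at 1025. A sketch; the array literal is ours, the variable names mirror setup/hugepages.sh:

# Values observed in the trace above.
nr_hugepages=1025 surp=0 resv=0
nodes_test=([0]=1025)                        # filled in by get_nodes in the trace

(( 1025 == nr_hugepages + surp + resv ))     # hugepages.sh@110 gate: 1025 == 1025
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))           # @116: fold in reserved pages (0 here)
    (( nodes_test[node] += 0 ))              # @117: node0 HugePages_Surp came back 0
    echo "node$node=${nodes_test[node]} expecting 1025"
done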
00:05:26.135 real 0m0.567s
00:05:26.135 user 0m0.286s
00:05:26.135 sys 0m0.320s
00:05:26.135 18:12:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:26.135 18:12:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:26.135 ************************************
00:05:26.135 END TEST odd_alloc
00:05:26.135 ************************************
00:05:26.135 18:12:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:26.135 18:12:37 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:26.135 18:12:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:26.135 18:12:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:26.135 18:12:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:26.135 ************************************
00:05:26.135 START TEST custom_alloc
00:05:26.135 ************************************
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
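The jump from get_test_nr_hugepages 1048576 to nr_hugepages=512 in the trace is unit math: the argument is a size in kB and the default hugepage on this VM is 2048 kB (see the Hugepagesize field in the meminfo dumps further down), so 1048576 / 2048 = 512 pages. A hedged sketch of that conversion; reading Hugepagesize with awk is our shorthand, not the script's exact code:

size=1048576   # kB requested by custom_alloc (1 GiB worth of pages)
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 kB here
(( size >= default_hugepages )) || exit 1      # hugepages.sh@55 sanity gate
nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
echo "nr_hugepages=$nr_hugepages"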
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:26.135 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:26.136 18:12:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:26.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:26.657 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:26.657 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:26.657 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:26.657 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
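The HUGENODE='nodes_hp[0]=512' string handed to scripts/setup.sh above is assembled from the nodes_hp array, joined with IFS=, so a multi-node request would come out comma-separated. A small sketch with this run's single-node values (the array literal is ours, for illustration):

IFS=,                          # hugepages.sh@167: join HUGENODE entries with ','
nodes_hp=([0]=512)             # custom_alloc wants 512 pages on node 0
HUGENODE=() _nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")    # @182
    (( _nr_hugepages += nodes_hp[node] ))              # @183: running page total
done
echo "HUGENODE='${HUGENODE[*]}' total=$_nr_hugepages"  # HUGENODE='nodes_hp[0]=512' total=512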
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:26.657 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:26.658 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9010300 kB' 'MemAvailable: 10589356 kB' 'Buffers: 2436 kB' 'Cached: 1792528 kB' 'SwapCached: 0 kB' 'Active: 462332 kB' 'Inactive: 1452492 kB' 'Active(anon): 130332 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121500 kB' 'Mapped: 49048 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135104 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72100 kB' 'KernelStack: 6324 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
00:05:26.658 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
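The @96 test above reads /sys/kernel/mm/transparent_hugepage/enabled, where the kernel brackets the active THP mode ("always [madvise] never" on this VM), and only bothers sampling AnonHugePages when the mode is not [never]. A sketch of the same gate, assuming a THP-enabled kernel; the awk lookup is our shorthand for the get_meminfo call:

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP may be serving anonymous huge pages; account for them
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon=0
fi
echo "anon=${anon:-0} kB"   # 0 kB in this run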
[... xtrace condensed: every /proc/meminfo field from MemTotal through HardwareCorrupted fails the \A\n\o\n\H\u\g\e\P\a\g\e\s match and is skipped with "continue" ...]
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:26.659 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9010048 kB' 'MemAvailable: 10589104 kB' 'Buffers: 2436 kB' 'Cached: 1792528 kB' 'SwapCached: 0 kB' 'Active: 462020 kB' 'Inactive: 1452492 kB' 'Active(anon): 130020 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121376 kB' 'Mapped: 48944 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135116 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72112 kB' 'KernelStack: 6244 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
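Unlike the node 0 lookup earlier, this get_meminfo call passes no node, so the [[ -n '' ]] test fails and mem_f stays /proc/meminfo; when a node is given, the file is swapped for the per-node one and mapfile plus an extglob expansion strip the "Node <n> " prefix before the field loop runs. A condensed sketch of that source-selection logic, with the node value hard-coded for illustration:

shopt -s extglob                   # needed for the +([0-9]) pattern below
node=                              # empty: global stats, as in the call above
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"          # common.sh@28: slurp the chosen file
mem=("${mem[@]#Node +([0-9]) }")   # @29: strip prefix; no-op for /proc/meminfo
printf '%s\n' "${mem[@]:0:3}"      # first few fields, now in a uniform shape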
[... xtrace condensed: the read loop tests each field from MemTotal through Unaccepted against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skips it with "continue" ...]
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9010048 kB' 'MemAvailable: 10589104 kB' 'Buffers: 2436 kB' 'Cached: 1792528 kB' 'SwapCached: 0 kB' 'Active: 461908 kB' 'Inactive: 1452492 kB' 'Active(anon): 129908 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121128 kB' 'Mapped: 48744 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135124 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72120 kB' 'KernelStack: 6320 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
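The helper being traced here is get_meminfo from setup/common.sh: given a field name and an optional NUMA node id, it reads the matching value out of /proc/meminfo, or out of that node's own meminfo file when a node id is supplied. A minimal sketch of that logic, reconstructed from this trace alone (names and control flow are inferred from the xtrace output, not copied from the SPDK tree, so treat the details as approximations):

    #!/usr/bin/env bash
    shopt -s extglob

    # Sketch of the traced helper, reconstructed from this log.
    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read that node's own meminfo instead.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes each line with "Node <id> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan field by field until the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp      # scans /proc/meminfo
    get_meminfo HugePages_Surp 0    # scans /sys/devices/system/node/node0/meminfo

In this run the first call returned 0 (the echo 0 / return 0 pair above), so surp=0; the same scan is now repeated for HugePages_Rsvd.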
00:05:26.661 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # ... (per-field scan: MemTotal through HugePages_Free each fail the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match, and the loop reads the next field) ...
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=512
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
resv_hugepages=0
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
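With surp and resv both read back as 0, the consistency guards at hugepages.sh@107 and @109 reduce to plain arithmetic over this run's values; roughly:

    surp=0 resv=0 nr_hugepages=512
    (( 512 == nr_hugepages + surp + resv ))   # 512 == 512 + 0 + 0 -> true
    (( 512 == nr_hugepages ))                 # the custom_alloc step requested 512 pages

Both pass, so the script re-reads HugePages_Total and then verifies the same numbers per NUMA node.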
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9010048 kB' 'MemAvailable: 10589104 kB' 'Buffers: 2436 kB' 'Cached: 1792528 kB' 'SwapCached: 0 kB' 'Active: 461904 kB' 'Inactive: 1452492 kB' 'Active(anon): 129904 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121072 kB' 'Mapped: 48744 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135124 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72120 kB' 'KernelStack: 6320 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
00:05:26.663 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # ... (per-field scan: MemTotal through Unaccepted each fail the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match, and the loop reads the next field) ...
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
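get_nodes, traced just above, appears to enumerate /sys/devices/system/node/node* and record the expected hugepage count per node before each node is checked individually. A rough reconstruction from the trace (nodes_test is populated earlier in hugepages.sh and is not visible in this log, so its setup here is an assumption):

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    declare -a nodes_sys nodes_test
    resv=0   # HugePages_Rsvd, read back just above

    # Record each NUMA node's expected hugepage count (this VM exposes node0 only).
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512   # ${node##*node} turns ".../node0" into "0"
    done
    no_nodes=${#nodes_sys[@]}           # 1 in this run
    (( no_nodes > 0 ))                  # the sanity check seen at hugepages.sh@33

    # nodes_test holds the per-node expectations (filled earlier in hugepages.sh,
    # not shown in this log); reserved pages are folded in before each node's
    # surplus count is read via get_meminfo HugePages_Surp <id>.
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
    done

With node=0, get_meminfo switches mem_f to the per-node file, which is why the mapfile below reads /sys/devices/system/node/node0/meminfo rather than /proc/meminfo.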
00:05:26.665 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9010048 kB' 'MemUsed: 3231932 kB' 'SwapCached: 0 kB' 'Active: 461920 kB' 'Inactive: 1452492 kB' 'Active(anon): 129920 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1794964 kB' 'Mapped: 48744 kB' 'AnonPages: 121084 kB' 'Shmem: 10472 kB' 'KernelStack: 6320 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63004 kB' 'Slab: 135120 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # ... (per-field scan of the node0 snapshot: MemTotal through FilePmdMapped each fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match, and the loop reads the next field) ...
00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.666 node0=512 expecting 512 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:26.666 00:05:26.666 real 0m0.684s 00:05:26.666 user 0m0.311s 00:05:26.666 sys 0m0.417s 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.666 18:12:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:26.666 ************************************ 00:05:26.666 END TEST custom_alloc 00:05:26.666 ************************************ 00:05:26.666 18:12:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:26.666 18:12:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:26.667 18:12:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.667 18:12:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.667 18:12:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:26.925 ************************************ 00:05:26.925 START TEST no_shrink_alloc 00:05:26.925 ************************************ 00:05:26.925 18:12:38 
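The field-by-field scan traced above is get_meminfo in setup/common.sh looking a value up; bash's `set -x` prints the unquoted right-hand side of each [[ ]] comparison with every character escaped, which is why the target renders as \H\u\g\e\P\a\g\e\s\_\S\u\r\p. A minimal sketch of the same lookup, reading /proc/meminfo directly rather than through the script's mapfile snapshot (get_meminfo_sketch is an illustrative name, not the real helper, which also handles per-node meminfo files):

    # Sketch only: looks up one field of /proc/meminfo, as the trace above does.
    get_meminfo_sketch() {
        local get=$1    # field to look up, e.g. HugePages_Surp
        local var val _
        while IFS=': ' read -r var val _; do
            # xtrace renders this test as e.g. [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
            if [[ $var == "$get" ]]; then
                echo "$val"   # value only; a trailing kB unit is absorbed by the _ variable
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp    # prints 0 on this host, matching the trace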
00:05:26.666 18:12:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:26.666 18:12:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:26.667 18:12:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:26.667 18:12:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:26.667 18:12:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:26.925 ************************************
00:05:26.925 START TEST no_shrink_alloc
00:05:26.925 ************************************
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:26.925 18:12:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:27.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:27.184 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:27.184 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:27.184 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:27.184 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
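For scale: the get_test_nr_hugepages call traced above turns a 2097152 kB (2 GiB) request into nr_hugepages=1024, which is consistent with dividing by the 2048 kB Hugepagesize reported in the meminfo snapshots below. A sketch of that sizing step under that assumption (the exact expression in hugepages.sh is not visible in this excerpt; the values are taken from the trace):

    # Illustrative sizing arithmetic, not verbatim script lines.
    default_hugepages=2048                         # kB, i.e. 'Hugepagesize: 2048 kB'
    size=2097152                                   # kB, first argument to get_test_nr_hugepages
    (( size >= default_hugepages )) || exit 1      # the guard traced at hugepages.sh@55
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    nodes_test=()                                  # per-node allocation; node 0 only in this run
    nodes_test[0]=$nr_hugepages
    echo "node0=${nodes_test[0]}"                  # node0=1024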
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7956912 kB' 'MemAvailable: 9535968 kB' 'Buffers: 2436 kB' 'Cached: 1792528 kB' 'SwapCached: 0 kB' 'Active: 462768 kB' 'Inactive: 1452492 kB' 'Active(anon): 130768 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121876 kB' 'Mapped: 48892 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135128 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72124 kB' 'KernelStack: 6360 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.447 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the IFS=': ' / read -r var val _ / compare / continue cycle repeats for every /proc/meminfo field from MemTotal through HardwareCorrupted; none matches AnonHugePages]
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7956912 kB' 'MemAvailable: 9535968 kB' 'Buffers: 2436 kB' 'Cached: 1792528 kB' 'SwapCached: 0 kB' 'Active: 460060 kB' 'Inactive: 1452492 kB' 'Active(anon): 128060 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119128 kB' 'Mapped: 48372 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135128 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72124 kB' 'KernelStack: 6328 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
00:05:27.449 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the same cycle repeats for every field from MemTotal through HugePages_Rsvd; none matches HugePages_Surp]
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
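Each get_meminfo call above also shows the snapshot step at setup/common.sh@28-@29: mapfile reads the whole meminfo file into an array, and an extglob substitution strips the "Node N " prefix that per-node files under /sys/devices/system/node carry, so one parser serves both layouts. A standalone sketch of that step; the node=0 path here is illustrative, since this run passes an empty node and falls through to /proc/meminfo:

    # Sketch of the snapshot-and-normalize step; requires extglob for +([0-9]).
    shopt -s extglob
    node=0                                   # illustrative; empty in the traced run
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node 0 "; strip it so the field
    # names line up with the global /proc/meminfo format.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"            # first few normalized lines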
-- setup/common.sh@32 -- # continue
[... xtrace elided: the IFS=': ' / read -r var val _ / [[ $var == key ]] / continue cycle repeats for HugePages_Total, HugePages_Free and HugePages_Rsvd ...]
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:27.451 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7956912 kB' 'MemAvailable: 9535964 kB' 'Buffers: 2436 kB' 'Cached: 1792528 kB' 'SwapCached: 0 kB' 'Active: 459460 kB' 'Inactive: 1452492 kB' 'Active(anon): 127460 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118560 kB' 'Mapped: 48124 kB' 'Shmem: 10472 kB' 'KReclaimable: 63000 kB' 'Slab: 135104 kB' 'SReclaimable: 63000 kB' 'SUnreclaim: 72104 kB' 'KernelStack: 6256 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
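The trace above is the setup/common.sh get_meminfo helper scanning the snapshot it just printed. A minimal standalone sketch of that scanning pattern, with simplified names of my own (the real helper additionally handles the per-node files shown further down):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo-style scan: load /proc/meminfo once, then walk
    # the lines splitting each into "key: value unit" and print the value of
    # the first line whose key matches the request.
    meminfo_value() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done
        echo 0
    }

    meminfo_value HugePages_Surp    # would print 0 on the box above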
[... xtrace elided: the same read/match/continue cycle walks every key of the dump above, from MemTotal through Unaccepted and the hugepage counters, until HugePages_Rsvd matches ...]
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:27.453 nr_hugepages=1024
18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
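The (( ... )) assertions here are hugepage bookkeeping: the pool the kernel reports must equal the requested page count plus any surplus and reserved pages. A self-contained sketch of that consistency check (helper name and the NRHUGE fallback are my own, not SPDK's):

    #!/usr/bin/env bash
    # Read one hugepage counter from /proc/meminfo; the kernel reports these
    # as "HugePages_<name>:   <count>" with no unit suffix.
    hp() { awk -F: -v key="HugePages_$1" '$1 == key {print $2 + 0}' /proc/meminfo; }

    requested=${NRHUGE:-1024}    # what the test run asked for (assumed default)
    total=$(hp Total) surp=$(hp Surp) resv=$(hp Rsvd)

    # The identity the trace asserts: total pages == requested + surplus + reserved.
    if (( total == requested + surp + resv )); then
        echo "hugepage pool consistent: ${total} pages"
    else
        echo "hugepage pool mismatch: total=$total requested=$requested surp=$surp resv=$resv" >&2
        exit 1
    fi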
18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:27.453 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7956912 kB' 'MemAvailable: 9535964 kB' 'Buffers: 2436 kB' 'Cached: 1792528 kB' 'SwapCached: 0 kB' 'Active: 459464 kB' 'Inactive: 1452492 kB' 'Active(anon): 127464 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118828 kB' 'Mapped: 48124 kB' 'Shmem: 10472 kB' 'KReclaimable: 63000 kB' 'Slab: 135104 kB' 'SReclaimable: 63000 kB' 'SUnreclaim: 72104 kB' 'KernelStack: 6256 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: the read/match/continue cycle walks the same keys again until HugePages_Total matches ...]
00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
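Next the helper is called again with an explicit node argument: when /sys/devices/system/node/node0/meminfo exists it replaces /proc/meminfo as the source, and each of its lines carries a leading "Node 0 " prefix that the extglob expansion in the trace (${mem[@]#Node +([0-9]) }) strips before parsing. A simplified sketch of that per-node path (function name and the plain prefix-strip are my own):

    #!/usr/bin/env bash
    # Per-node variant: read the node's own meminfo file, drop the "Node N "
    # prefix from every line, then scan for the requested key as before.
    node_meminfo() {
        local node=$1 key=$2 file line var val _
        file=/sys/devices/system/node/node${node}/meminfo
        [[ -e $file ]] || { echo "no such node: $node" >&2; return 1; }
        while IFS= read -r line; do
            line=${line#Node "$node" }     # e.g. "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
        done < "$file"
        echo 0
    }

    node_meminfo 0 HugePages_Surp    # would print 0 on the box above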
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.455 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.456 18:12:39 
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the read loop tests each remaining /proc/meminfo key (Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) against HugePages_Surp and continues past each]
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:27.456 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:27.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:27.977 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:27.977 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:27.977 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:27.977 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:27.977 INFO: Requested 512 hugepages but 1024 already allocated on node0
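The scan condensed above is the whole of get_meminfo: walk the key/value pairs of a meminfo snapshot, continue past every key that is not the one requested, then echo its value and return. (The \H\u\g\e\P\a\g\e\s\_\S\u\r\p escapes are just how xtrace prints the right-hand side of [[ == ]] as a literal pattern.) A minimal sketch of the same pattern, reading /proc/meminfo directly instead of the script's mapfile'd array -- an illustration, not the actual setup/common.sh source:

#!/usr/bin/env bash
# Sketch: fetch one key from /proc/meminfo the way the traced loop does --
# split each line on ': ', continue past mismatches, echo the first match.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each mismatch is one 'continue' in xtrace
        echo "$val"                        # value only; a trailing unit ('kB') lands in $_
        return 0
    done </proc/meminfo
    return 1                               # key not present
}

get_meminfo HugePages_Surp

On the node traced here it prints 0, matching the echo 0 above.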
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:27.977 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7959352 kB' 'MemAvailable: 9538400 kB' 'Buffers: 2436 kB' 'Cached: 1792524 kB' 'SwapCached: 0 kB' 'Active: 460504 kB' 'Inactive: 1452488 kB' 'Active(anon): 128504 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452488 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 119636 kB' 'Mapped: 47972 kB' 'Shmem: 10472 kB' 'KReclaimable: 63000 kB' 'Slab: 134924 kB' 'SReclaimable: 63000 kB' 'SUnreclaim: 71924 kB' 'KernelStack: 6344 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the read loop tests every snapshot key from MemTotal through HardwareCorrupted against AnonHugePages and continues past each]
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
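Each get_meminfo call opens with the same mapfile/prefix-strip pair, and the next call just below repeats it: the snapshot is read into an array, then mem=("${mem[@]#Node +([0-9]) }") drops the Node <n> prefix that per-node meminfo files under /sys/devices/system/node carry, so one parser serves both formats. A standalone sketch of that normalization (the sample lines are illustrative; extglob is required for the +([0-9]) pattern):

#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern below is an extended glob

# Per-node files (/sys/devices/system/node/node0/meminfo) prefix every
# line with "Node <n> "; /proc/meminfo does not. Illustrative samples:
mem=("Node 0 MemTotal:       12241980 kB"
     "Node 0 HugePages_Total:     1024")

# Strip the prefix from every element, the same expansion the trace
# shows at setup/common.sh@29:
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
# MemTotal:       12241980 kB
# HugePages_Total:     1024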
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:27.979 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7959456 kB' 'MemAvailable: 9538504 kB' 'Buffers: 2436 kB' 'Cached: 1792524 kB' 'SwapCached: 0 kB' 'Active: 459808 kB' 'Inactive: 1452488 kB' 'Active(anon): 127808 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452488 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118908 kB' 'Mapped: 47984 kB' 'Shmem: 10472 kB' 'KReclaimable: 63000 kB' 'Slab: 134928 kB' 'SReclaimable: 63000 kB' 'SUnreclaim: 71928 kB' 'KernelStack: 6264 kB' 'PageTables: 3648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the read loop continues past every key of the new snapshot until HugePages_Surp matches]
00:05:27.980 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
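At this point verify_nr_hugepages holds anon and surp and, just below, fetches resv; the earlier hugepages.sh@117-@130 lines show how the per-node totals are then checked, with arrays used as sets so identical counts collapse to a single key. A condensed sketch of that bookkeeping under the values from the snapshots above (variable names mirror the trace; hypothetical, not the actual setup/hugepages.sh):

#!/usr/bin/env bash
# Values as read back by get_meminfo in this trace:
anon=0   # AnonHugePages
surp=0   # HugePages_Surp
resv=0   # HugePages_Rsvd (0 per the snapshots; the log is cut before it returns)

expected=1024
nodes_test[0]=1024            # hugepages observed on node0
(( nodes_test[0] += surp ))   # surplus pages count toward the node total

for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1   # array as a set: one key per distinct count
    echo "node${node}=${nodes_test[node]} expecting ${expected}"
done

(( ${#sorted_t[@]} == 1 )) && echo 'nr_hugepages verified'

Run as-is it prints node0=1024 expecting 1024, the same line the trace echoes at hugepages.sh@128.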
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7959456 kB' 'MemAvailable: 9538504 kB' 'Buffers: 2436 kB' 'Cached: 1792524 kB' 'SwapCached: 0 kB' 'Active: 459476 kB' 'Inactive: 1452488 kB' 'Active(anon): 127476 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452488 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118832 kB' 'Mapped: 48004 kB' 'Shmem: 10472 kB' 'KReclaimable: 63000 kB' 'Slab: 134924 kB' 'SReclaimable: 63000 kB' 'SUnreclaim: 71924 kB' 'KernelStack: 6224 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:27.981 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-@32 repeat the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue records for every remaining /proc/meminfo key from Active through CmaFree; none matches HugePages_Rsvd]
00:05:27.982 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.982 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.982 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc
-- setup/common.sh@32 -- # continue 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:27.983 nr_hugepages=1024 00:05:27.983 resv_hugepages=0 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:27.983 surplus_hugepages=0 00:05:27.983 anon_hugepages=0 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7959868 kB' 'MemAvailable: 9538916 kB' 'Buffers: 2436 kB' 'Cached: 1792524 kB' 'SwapCached: 0 kB' 'Active: 459312 kB' 'Inactive: 1452488 kB' 'Active(anon): 127312 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452488 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118704 kB' 'Mapped: 48004 kB' 'Shmem: 10472 kB' 'KReclaimable: 63000 kB' 'Slab: 134924 kB' 'SReclaimable: 63000 kB' 'SUnreclaim: 71924 kB' 'KernelStack: 6256 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.983 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.244 18:12:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the second get_meminfo scan repeats the same IFS/read/test/continue records for every key from Active through CmaFree; none matches HugePages_Total]
00:05:28.245 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.245 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.245 18:12:40
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.245 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.245 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.245 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.245 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.245 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:28.245 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:28.245 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:28.245 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7959868 kB' 'MemUsed: 4282112 kB' 'SwapCached: 0 kB' 'Active: 459248 kB' 'Inactive: 1452488 kB' 'Active(anon): 127248 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1452488 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1794960 kB' 'Mapped: 48004 kB' 'AnonPages: 118604 kB' 'Shmem: 10472 kB' 'KernelStack: 6240 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'KReclaimable: 63000 kB' 'Slab: 134920 kB' 'SReclaimable: 63000 kB' 'SUnreclaim: 71920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.246 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue
[xtrace condensed: the node0 get_meminfo scan repeats the same IFS/read/test/continue records for every key from Active(file) through FilePmdMapped; none matches HugePages_Surp]
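The three key scans condensed above are all calls to get_meminfo in the suite's setup/common.sh. Pieced together purely from the traced commands (the local get/node declarations, the mem_f switch between /proc/meminfo and /sys/devices/system/node/node$node/meminfo at @22-@24, the "Node N " prefix strip at @29, and the IFS=': ' read loop at @31-@33), the function plausibly looks like the sketch below; the variable names come from the trace, but the exact control flow is an inference, not the verbatim SPDK source.

shopt -s extglob   # the +([0-9]) pattern at common.sh@29 implies extglob is on

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    # Read the per-NUMA-node meminfo when a node index is given and exists,
    # otherwise fall back to the global /proc/meminfo.
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem <"$mem_f"
    # node*/meminfo lines carry a "Node 0 " prefix; strip it off.
    mem=("${mem[@]#Node +([0-9]) }")

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each skipped key logs a continue above
        echo "$val"                        # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

With node unset, the @23 test probes the nonexistent /sys/devices/system/node/node/meminfo and the call reads /proc/meminfo; with node=0 it switches to node0's file, exactly as traced. hugepages.sh then asserts the invariant this test is about at @107/@110: (( HugePages_Total == nr_hugepages + surplus + reserved )), here 1024 == 1024 + 0 + 0.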
00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:28.247 node0=1024 expecting 1024 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:28.247 00:05:28.247 real 0m1.378s 00:05:28.247 user 0m0.657s 00:05:28.247 sys 0m0.807s 00:05:28.247 ************************************ 00:05:28.247 END TEST no_shrink_alloc 00:05:28.247 ************************************ 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.247 18:12:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:28.247 18:12:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:28.247 18:12:40 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:28.247 18:12:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:28.247 18:12:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:28.247 18:12:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:28.247 18:12:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:28.247 18:12:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:28.247 18:12:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:28.247 18:12:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:28.247 18:12:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:28.247 ************************************ 00:05:28.247 END TEST hugepages 00:05:28.247 ************************************ 00:05:28.247 00:05:28.247 real 0m5.883s 00:05:28.247 user 0m2.676s 00:05:28.247 sys 0m3.356s 00:05:28.247 18:12:40 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.247 18:12:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:28.247 18:12:40 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:28.247 18:12:40 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:28.247 18:12:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.247 18:12:40 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.247 18:12:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:28.247 ************************************ 00:05:28.247 START TEST driver 00:05:28.247 ************************************ 00:05:28.247 18:12:40 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:28.247 * Looking for test storage... 00:05:28.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:28.247 18:12:40 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:28.247 18:12:40 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:28.247 18:12:40 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:34.846 18:12:46 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:34.846 18:12:46 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.846 18:12:46 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.846 18:12:46 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:34.846 ************************************ 00:05:34.846 START TEST guess_driver 00:05:34.846 ************************************ 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:34.846 
18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:34.846 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:34.846 Looking for driver=uio_pci_generic 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:34.846 18:12:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:35.414 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:35.414 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:35.414 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:35.414 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:35.414 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:35.414 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:35.414 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:35.414 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:35.414 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:35.414 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:35.414 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:35.414 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:35.693 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:35.693 18:12:47 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:35.693 18:12:47 setup.sh.driver.guess_driver 
-- setup/common.sh@9 -- # [[ reset == output ]] 00:05:35.693 18:12:47 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:42.257 00:05:42.257 real 0m7.228s 00:05:42.257 user 0m0.814s 00:05:42.257 sys 0m1.475s 00:05:42.257 18:12:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.257 ************************************ 00:05:42.257 END TEST guess_driver 00:05:42.257 ************************************ 00:05:42.257 18:12:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:42.257 18:12:53 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:42.257 ************************************ 00:05:42.257 END TEST driver 00:05:42.257 ************************************ 00:05:42.257 00:05:42.257 real 0m13.305s 00:05:42.257 user 0m1.149s 00:05:42.257 sys 0m2.301s 00:05:42.257 18:12:53 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.257 18:12:53 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:42.257 18:12:53 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:42.257 18:12:53 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:42.257 18:12:53 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.257 18:12:53 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.257 18:12:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:42.257 ************************************ 00:05:42.257 START TEST devices 00:05:42.257 ************************************ 00:05:42.257 18:12:53 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:42.257 * Looking for test storage... 
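Before the devices suite proceeds, the guess_driver pass that just completed above is worth unpacking: pick_driver first tries vfio, and with zero IOMMU groups present and unsafe no-IOMMU mode unset it returns 1, so the test falls through to uio and accepts uio_pci_generic once modprobe --show-depends resolves to real .ko modules. A condensed sketch of that decision (logic as traced; the function body here is simplified, not the verbatim driver.sh):

pick_driver() {
    # vfio needs IOMMU groups, or the explicit unsafe no-IOMMU override
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe
    unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
    if [[ -d ${groups[0]} ]] || [[ $unsafe == Y ]]; then
        echo vfio-pci
    # otherwise accept uio_pci_generic iff modprobe resolves its modules to .ko files
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}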
00:05:42.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:42.257 18:12:53 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:42.257 18:12:53 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:42.257 18:12:53 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:42.257 18:12:53 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:42.823 18:12:54 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:42.823 18:12:54 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:42.823 18:12:54 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:42.823 18:12:54 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:42.823 18:12:54 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:42.823 18:12:54 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:42.824 18:12:54 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:42.824 18:12:54 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:42.824 No valid GPT data, bailing 00:05:42.824 18:12:54 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:42.824 18:12:54 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:42.824 18:12:54 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:42.824 18:12:54 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:42.824 18:12:54 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:42.824 18:12:54 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:42.824 18:12:54 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:42.824 
18:12:54 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:42.824 18:12:54 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:42.824 18:12:54 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:43.082 No valid GPT data, bailing 00:05:43.082 18:12:54 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:43.082 18:12:54 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:43.082 18:12:54 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:43.082 18:12:54 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:43.082 18:12:54 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:43.082 18:12:54 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:43.082 18:12:54 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:05:43.082 18:12:54 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:43.082 No valid GPT data, bailing 00:05:43.082 18:12:54 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:43.082 18:12:54 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:43.082 18:12:54 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:43.082 18:12:54 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:43.082 18:12:54 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:43.082 18:12:54 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:43.082 18:12:54 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:05:43.083 18:12:54 
setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:05:43.083 18:12:54 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:05:43.083 No valid GPT data, bailing 00:05:43.083 18:12:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:43.083 18:12:55 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:43.083 18:12:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:43.083 18:12:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:05:43.083 18:12:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:05:43.083 18:12:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:05:43.083 18:12:55 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:43.083 18:12:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:43.083 18:12:55 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:43.083 18:12:55 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:43.083 18:12:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:43.083 18:12:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:05:43.083 18:12:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:43.083 18:12:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:43.083 18:12:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:43.083 18:12:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:05:43.083 18:12:55 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:05:43.083 18:12:55 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:05:43.342 No valid GPT data, bailing 00:05:43.342 18:12:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:43.342 18:12:55 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:43.342 18:12:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:05:43.342 18:12:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:05:43.342 18:12:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:05:43.342 18:12:55 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:43.342 18:12:55 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:05:43.342 18:12:55 setup.sh.devices 
-- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:43.342 No valid GPT data, bailing 00:05:43.342 18:12:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:43.342 18:12:55 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:43.342 18:12:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:43.342 18:12:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:43.342 18:12:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:43.342 18:12:55 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:43.342 18:12:55 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:43.342 18:12:55 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.342 18:12:55 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.342 18:12:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:43.342 ************************************ 00:05:43.342 START TEST nvme_mount 00:05:43.342 ************************************ 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:43.342 18:12:55 setup.sh.devices.nvme_mount -- setup/common.sh@53 
-- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:44.285 Creating new GPT entries in memory. 00:05:44.285 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:44.285 other utilities. 00:05:44.285 18:12:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:44.285 18:12:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:44.285 18:12:56 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:44.285 18:12:56 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:44.285 18:12:56 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:45.660 Creating new GPT entries in memory. 00:05:45.660 The operation has completed successfully. 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59469 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- 
# [[ output == output ]] 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.660 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.919 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.919 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.919 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.919 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.919 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.919 18:12:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.179 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:46.179 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.439 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:46.439 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:46.439 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.439 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:46.439 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:46.439 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:46.439 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.439 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.439 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:46.439 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:46.439 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:46.439 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:46.439 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:46.697 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:46.697 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 
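The wipefs lines around this point are nvme_mount's cleanup tearing down the first pass (a mounted partition) before the second pass re-runs the same cycle against the whole disk. That cycle, reduced to its essentials under the paths seen in the trace (mnt and dev here are stand-in variables, not names from the script):

mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount   # mount point, as in the trace
dev=/dev/nvme0n1p1                                       # pass 1; pass 2 uses /dev/nvme0n1

mkdir -p "$mnt"
mkfs.ext4 -qF "$dev"          # quiet, force: the device was just (re)partitioned
mount "$dev" "$mnt"
: > "$mnt/test_nvme"          # dummy file the verify step checks for

mountpoint -q "$mnt"          # verify: still mounted...
[[ -e $mnt/test_nvme ]]       # ...and the test file survived

# teardown, mirroring cleanup_nvme in the trace
rm "$mnt/test_nvme"
umount "$mnt"
wipefs --all "$dev"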
00:05:46.697 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:46.697 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.697 18:12:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:46.956 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:46.956 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:46.956 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:46.956 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.956 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:46.956 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.956 18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:46.956 
18:12:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.216 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.216 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.216 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.216 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.474 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.474 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.733 18:12:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:47.991 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.991 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:47.991 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:47.991 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.991 18:12:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.992 18:12:59 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.250 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.250 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.250 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.250 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.250 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.250 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.509 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.509 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.769 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:48.769 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:48.769 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:48.769 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:48.769 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.769 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:48.769 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:48.769 18:13:00 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:48.769 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:48.769 00:05:48.769 real 0m5.395s 00:05:48.769 user 0m1.472s 00:05:48.769 sys 0m1.600s 00:05:48.769 18:13:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.769 18:13:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:48.769 ************************************ 00:05:48.769 END TEST nvme_mount 00:05:48.769 ************************************ 00:05:48.769 18:13:00 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:48.769 18:13:00 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:48.769 18:13:00 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.769 18:13:00 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.769 18:13:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:48.769 ************************************ 00:05:48.769 START TEST dm_mount 00:05:48.769 ************************************ 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:48.769 
18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:48.769 18:13:00 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:49.704 Creating new GPT entries in memory. 00:05:49.704 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:49.704 other utilities. 00:05:49.704 18:13:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:49.704 18:13:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:49.704 18:13:01 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:49.704 18:13:01 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:49.704 18:13:01 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:51.130 Creating new GPT entries in memory. 00:05:51.130 The operation has completed successfully. 00:05:51.130 18:13:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:51.130 18:13:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:51.130 18:13:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:51.130 18:13:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:51.130 18:13:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:52.066 The operation has completed successfully. 
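Both sgdisk passes above follow the same start/end arithmetic from setup/common.sh: the nominal 1073741824-byte size is divided down to 262144 sectors per partition, the first partition starts at LBA 2048, and each subsequent partition starts one sector past the previous end. Worked out for the two partitions just created:

disk=/dev/nvme0n1
size=1073741824            # bytes, as set in common.sh
(( size /= 4096 ))         # -> 262144 sectors per partition, as traced
part_start=0 part_end=0

sgdisk "$disk" --zap-all   # wipe existing GPT and MBR structures first
for part in 1 2; do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # pass 1 yields --new=1:2048:264191, pass 2 yields --new=2:264192:526335
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done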
00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60098 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.066 18:13:03 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:52.066 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.066 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:52.066 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:52.066 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.066 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.066 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.324 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.324 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.324 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.324 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.324 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.324 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.584 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.584 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.843 18:13:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:53.101 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.101 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:53.101 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:53.101 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.101 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.101 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.361 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.361 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.361 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.361 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.361 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.361 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.621 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.621 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.880 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:53.880 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:53.880 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:53.880 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:53.880 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.880 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:53.880 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:53.880 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:53.880 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
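cleanup_dm, whose wipefs output follows, unwinds the device-mapper test in the reverse order of setup: unmount if anything is still mounted, remove the dm node created earlier with dmsetup create, then scrub both backing GPT partitions. In outline, following the trace:

dm_mnt=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount   # as in the trace

mountpoint -q "$dm_mnt" && umount "$dm_mnt"
# tear down the nvme_dm_test mapping (its table was fed to dmsetup create
# earlier; the exact table line is not shown in this log)
[[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
# scrub the two partitions that backed the mapping
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
[[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2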
00:05:53.880 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:53.880 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:53.880 18:13:05 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:53.880 00:05:53.880 real 0m5.156s 00:05:53.880 user 0m0.952s 00:05:53.880 sys 0m1.125s 00:05:53.880 18:13:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.880 ************************************ 00:05:53.880 END TEST dm_mount 00:05:53.880 ************************************ 00:05:53.880 18:13:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:53.880 18:13:05 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:53.880 18:13:05 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:53.880 18:13:05 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:53.880 18:13:05 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.880 18:13:05 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:53.880 18:13:05 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:53.880 18:13:05 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:53.880 18:13:05 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:54.447 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:54.447 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:54.447 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:54.447 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:54.447 18:13:06 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:54.447 18:13:06 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.447 18:13:06 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:54.447 18:13:06 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:54.447 18:13:06 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:54.447 18:13:06 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:54.447 18:13:06 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:54.447 00:05:54.448 real 0m12.667s 00:05:54.448 user 0m3.358s 00:05:54.448 sys 0m3.594s 00:05:54.448 18:13:06 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.448 ************************************ 00:05:54.448 END TEST devices 00:05:54.448 ************************************ 00:05:54.448 18:13:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:54.448 18:13:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:54.448 00:05:54.448 real 0m44.199s 00:05:54.448 user 0m10.351s 00:05:54.448 sys 0m13.450s 00:05:54.448 18:13:06 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.448 18:13:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:54.448 ************************************ 00:05:54.448 END TEST setup.sh 00:05:54.448 ************************************ 00:05:54.448 18:13:06 -- common/autotest_common.sh@1142 -- # return 0 00:05:54.448 18:13:06 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:55.015 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:55.274 Hugepages 00:05:55.274 node hugesize free / total 00:05:55.274 node0 1048576kB 0 / 0 00:05:55.274 node0 2048kB 2048 / 2048 00:05:55.274 00:05:55.274 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:55.532 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:55.532 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:55.532 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:55.790 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:55.790 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:55.790 18:13:07 -- spdk/autotest.sh@130 -- # uname -s 00:05:55.790 18:13:07 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:55.790 18:13:07 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:55.790 18:13:07 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:56.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:56.927 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:56.927 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:56.927 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:56.927 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:56.927 18:13:08 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:57.890 18:13:09 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:57.890 18:13:09 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:57.890 18:13:09 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:57.890 18:13:09 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:57.890 18:13:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:57.890 18:13:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:57.890 18:13:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:57.890 18:13:09 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:57.890 18:13:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:58.149 18:13:09 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:58.149 18:13:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:58.149 18:13:09 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:58.407 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.666 Waiting for block devices as requested 00:05:58.666 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:58.666 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:58.666 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:58.924 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:04.211 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:04.211 18:13:15 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:04.212 18:13:15 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:04.212 18:13:15 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:04.212 18:13:15 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:06:04.212 18:13:15 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:04.212 18:13:15 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:04.212 18:13:15 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:06:04.212 18:13:15 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:06:04.212 18:13:15 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:04.212 18:13:15 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:04.212 18:13:15 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:04.212 18:13:15 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1557 -- # continue 00:06:04.212 18:13:15 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:04.212 18:13:15 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:04.212 18:13:15 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:04.212 18:13:15 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:06:04.212 18:13:15 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:04.212 18:13:15 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:04.212 18:13:15 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:04.212 18:13:15 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:04.212 18:13:15 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:04.212 18:13:15 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:04.212 18:13:15 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:04.212 18:13:15 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1557 -- # continue 00:06:04.212 18:13:15 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:04.212 18:13:15 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:04.212 18:13:15 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:06:04.212 18:13:15 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:04.212 18:13:15 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:04.212 18:13:15 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:04.212 18:13:15 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:06:04.212 18:13:15 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:06:04.212 18:13:15 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:04.212 18:13:15 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:04.212 18:13:15 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:04.212 18:13:15 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1557 -- # continue 00:06:04.212 18:13:15 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:04.212 18:13:15 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:04.212 18:13:15 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:04.212 18:13:15 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:06:04.212 18:13:15 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:04.212 18:13:15 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:04.212 18:13:15 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:06:04.212 18:13:15 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:06:04.212 18:13:15 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:04.212 18:13:15 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:04.212 18:13:15 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:04.212 18:13:15 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:04.212 18:13:15 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:04.212 18:13:15 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:04.212 18:13:15 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:04.212 18:13:15 -- common/autotest_common.sh@1557 -- # continue 00:06:04.212 18:13:15 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:04.212 18:13:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.212 18:13:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.212 18:13:16 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:04.212 18:13:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:04.212 18:13:16 -- common/autotest_common.sh@10 -- # set +x 00:06:04.212 18:13:16 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:04.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:05.348 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:05.348 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:05.348 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:05.348 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:05.348 18:13:17 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:05.348 18:13:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.348 18:13:17 -- common/autotest_common.sh@10 -- # set +x 00:06:05.348 18:13:17 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:05.348 18:13:17 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:05.348 18:13:17 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:05.348 18:13:17 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:05.348 18:13:17 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:05.348 18:13:17 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:05.348 18:13:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:05.348 18:13:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:05.348 18:13:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:05.348 18:13:17 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:05.348 18:13:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:05.606 18:13:17 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:05.606 18:13:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:05.606 18:13:17 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:05.606 18:13:17 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:05.606 18:13:17 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:05.606 18:13:17 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:05.606 18:13:17 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:05.606 18:13:17 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:05.606 18:13:17 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:05.606 18:13:17 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:05.606 18:13:17 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:05.606 18:13:17 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:05.606 18:13:17 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:05.606 18:13:17 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:05.606 18:13:17 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:05.606 18:13:17 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:05.606 18:13:17 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:05.606 18:13:17 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:05.606 18:13:17 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:05.606 18:13:17 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:05.606 18:13:17 -- common/autotest_common.sh@1593 -- # return 0 00:06:05.606 18:13:17 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:05.606 18:13:17 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:05.606 18:13:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:05.606 18:13:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:05.606 18:13:17 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:05.606 18:13:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.606 18:13:17 -- common/autotest_common.sh@10 -- # set +x 00:06:05.606 18:13:17 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:05.606 18:13:17 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:05.606 18:13:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.606 18:13:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.606 18:13:17 -- common/autotest_common.sh@10 -- # set +x 00:06:05.606 ************************************ 00:06:05.606 START TEST env 00:06:05.606 ************************************ 00:06:05.606 18:13:17 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:05.606 * Looking for test storage... 00:06:05.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:05.606 18:13:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:05.606 18:13:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.606 18:13:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.606 18:13:17 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.606 ************************************ 00:06:05.606 START TEST env_memory 00:06:05.606 ************************************ 00:06:05.606 18:13:17 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:05.606 00:06:05.606 00:06:05.606 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.606 http://cunit.sourceforge.net/ 00:06:05.606 00:06:05.606 00:06:05.606 Suite: memory 00:06:05.606 Test: alloc and free memory map ...[2024-07-22 18:13:17.611045] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:05.865 passed 00:06:05.865 Test: mem map translation ...[2024-07-22 18:13:17.673478] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:05.865 [2024-07-22 18:13:17.673582] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:05.865 [2024-07-22 18:13:17.673699] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:05.865 [2024-07-22 18:13:17.673749] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:05.865 passed 00:06:05.865 Test: mem map registration ...[2024-07-22 18:13:17.775570] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:05.865 [2024-07-22 18:13:17.775672] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:05.865 passed 00:06:06.123 Test: mem map adjacent registrations ...passed 00:06:06.123 00:06:06.123 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.123 suites 1 1 n/a 0 0 00:06:06.123 tests 4 4 4 0 0 00:06:06.123 asserts 152 152 152 0 n/a 00:06:06.123 00:06:06.123 Elapsed time = 0.354 seconds 00:06:06.123 00:06:06.123 real 0m0.393s 00:06:06.123 user 0m0.365s 00:06:06.123 sys 0m0.023s 00:06:06.123 18:13:17 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.123 ************************************ 00:06:06.123 END TEST env_memory 00:06:06.123 ************************************ 00:06:06.123 18:13:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:06.123 18:13:17 env -- common/autotest_common.sh@1142 -- # return 0 00:06:06.123 18:13:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:06.123 18:13:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.123 18:13:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.123 18:13:17 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.123 ************************************ 00:06:06.123 START TEST env_vtophys 00:06:06.123 ************************************ 00:06:06.123 18:13:17 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:06.123 EAL: lib.eal log level changed from notice to debug 00:06:06.123 EAL: Detected lcore 0 as core 0 on socket 0 00:06:06.123 EAL: Detected lcore 1 as core 0 on socket 0 00:06:06.123 EAL: Detected lcore 2 as core 0 on socket 0 00:06:06.123 EAL: Detected lcore 3 as core 0 on socket 0 00:06:06.123 EAL: Detected lcore 4 as core 0 on socket 0 00:06:06.123 EAL: Detected lcore 5 as core 0 on socket 0 00:06:06.123 EAL: Detected lcore 6 as core 0 on socket 0 00:06:06.123 EAL: Detected lcore 7 as core 0 on socket 0 00:06:06.123 EAL: Detected lcore 8 as core 0 on socket 0 00:06:06.123 EAL: Detected lcore 9 as core 0 on socket 0 00:06:06.123 EAL: Maximum logical cores by configuration: 128 00:06:06.123 EAL: Detected CPU lcores: 10 00:06:06.123 EAL: Detected NUMA nodes: 1 00:06:06.123 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:06.123 EAL: Detected shared linkage of DPDK 00:06:06.123 EAL: No shared files mode enabled, IPC will be disabled 00:06:06.123 EAL: Selected IOVA mode 'PA' 00:06:06.123 EAL: Probing VFIO support... 00:06:06.123 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:06.123 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:06.123 EAL: Ask a virtual area of 0x2e000 bytes 00:06:06.124 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:06.124 EAL: Setting up physically contiguous memory... 
00:06:06.124 EAL: Setting maximum number of open files to 524288 00:06:06.124 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:06.124 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:06.124 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.124 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:06.124 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.124 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.124 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:06.124 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:06.124 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.124 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:06.124 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.124 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.124 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:06.124 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:06.124 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.124 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:06.124 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.124 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.124 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:06.124 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:06.124 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.124 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:06.124 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.124 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.124 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:06.124 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:06.124 EAL: Hugepages will be freed exactly as allocated. 00:06:06.124 EAL: No shared files mode enabled, IPC is disabled 00:06:06.124 EAL: No shared files mode enabled, IPC is disabled 00:06:06.381 EAL: TSC frequency is ~2200000 KHz 00:06:06.381 EAL: Main lcore 0 is ready (tid=7f26278d4a40;cpuset=[0]) 00:06:06.381 EAL: Trying to obtain current memory policy. 00:06:06.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.381 EAL: Restoring previous memory policy: 0 00:06:06.381 EAL: request: mp_malloc_sync 00:06:06.381 EAL: No shared files mode enabled, IPC is disabled 00:06:06.381 EAL: Heap on socket 0 was expanded by 2MB 00:06:06.381 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:06.381 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:06.381 EAL: Mem event callback 'spdk:(nil)' registered 00:06:06.381 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:06.381 00:06:06.382 00:06:06.382 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.382 http://cunit.sourceforge.net/ 00:06:06.382 00:06:06.382 00:06:06.382 Suite: components_suite 00:06:06.947 Test: vtophys_malloc_test ...passed 00:06:06.947 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:06.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.947 EAL: Restoring previous memory policy: 4 00:06:06.947 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.947 EAL: request: mp_malloc_sync 00:06:06.947 EAL: No shared files mode enabled, IPC is disabled 00:06:06.947 EAL: Heap on socket 0 was expanded by 4MB 00:06:06.947 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.947 EAL: request: mp_malloc_sync 00:06:06.947 EAL: No shared files mode enabled, IPC is disabled 00:06:06.947 EAL: Heap on socket 0 was shrunk by 4MB 00:06:06.947 EAL: Trying to obtain current memory policy. 00:06:06.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.947 EAL: Restoring previous memory policy: 4 00:06:06.947 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.947 EAL: request: mp_malloc_sync 00:06:06.947 EAL: No shared files mode enabled, IPC is disabled 00:06:06.947 EAL: Heap on socket 0 was expanded by 6MB 00:06:06.947 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.947 EAL: request: mp_malloc_sync 00:06:06.947 EAL: No shared files mode enabled, IPC is disabled 00:06:06.947 EAL: Heap on socket 0 was shrunk by 6MB 00:06:06.947 EAL: Trying to obtain current memory policy. 00:06:06.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.947 EAL: Restoring previous memory policy: 4 00:06:06.947 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.947 EAL: request: mp_malloc_sync 00:06:06.947 EAL: No shared files mode enabled, IPC is disabled 00:06:06.947 EAL: Heap on socket 0 was expanded by 10MB 00:06:06.947 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.947 EAL: request: mp_malloc_sync 00:06:06.947 EAL: No shared files mode enabled, IPC is disabled 00:06:06.947 EAL: Heap on socket 0 was shrunk by 10MB 00:06:06.947 EAL: Trying to obtain current memory policy. 00:06:06.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.947 EAL: Restoring previous memory policy: 4 00:06:06.947 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.947 EAL: request: mp_malloc_sync 00:06:06.947 EAL: No shared files mode enabled, IPC is disabled 00:06:06.947 EAL: Heap on socket 0 was expanded by 18MB 00:06:06.947 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.947 EAL: request: mp_malloc_sync 00:06:06.947 EAL: No shared files mode enabled, IPC is disabled 00:06:06.947 EAL: Heap on socket 0 was shrunk by 18MB 00:06:06.947 EAL: Trying to obtain current memory policy. 00:06:06.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.947 EAL: Restoring previous memory policy: 4 00:06:06.947 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.947 EAL: request: mp_malloc_sync 00:06:06.947 EAL: No shared files mode enabled, IPC is disabled 00:06:06.947 EAL: Heap on socket 0 was expanded by 34MB 00:06:06.947 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.947 EAL: request: mp_malloc_sync 00:06:06.947 EAL: No shared files mode enabled, IPC is disabled 00:06:06.947 EAL: Heap on socket 0 was shrunk by 34MB 00:06:06.947 EAL: Trying to obtain current memory policy. 
00:06:06.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.206 EAL: Restoring previous memory policy: 4 00:06:07.206 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.206 EAL: request: mp_malloc_sync 00:06:07.206 EAL: No shared files mode enabled, IPC is disabled 00:06:07.206 EAL: Heap on socket 0 was expanded by 66MB 00:06:07.206 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.206 EAL: request: mp_malloc_sync 00:06:07.206 EAL: No shared files mode enabled, IPC is disabled 00:06:07.206 EAL: Heap on socket 0 was shrunk by 66MB 00:06:07.206 EAL: Trying to obtain current memory policy. 00:06:07.206 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.206 EAL: Restoring previous memory policy: 4 00:06:07.206 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.206 EAL: request: mp_malloc_sync 00:06:07.206 EAL: No shared files mode enabled, IPC is disabled 00:06:07.206 EAL: Heap on socket 0 was expanded by 130MB 00:06:07.465 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.465 EAL: request: mp_malloc_sync 00:06:07.465 EAL: No shared files mode enabled, IPC is disabled 00:06:07.465 EAL: Heap on socket 0 was shrunk by 130MB 00:06:07.724 EAL: Trying to obtain current memory policy. 00:06:07.724 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.724 EAL: Restoring previous memory policy: 4 00:06:07.724 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.724 EAL: request: mp_malloc_sync 00:06:07.724 EAL: No shared files mode enabled, IPC is disabled 00:06:07.724 EAL: Heap on socket 0 was expanded by 258MB 00:06:08.301 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.301 EAL: request: mp_malloc_sync 00:06:08.301 EAL: No shared files mode enabled, IPC is disabled 00:06:08.301 EAL: Heap on socket 0 was shrunk by 258MB 00:06:08.560 EAL: Trying to obtain current memory policy. 00:06:08.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.819 EAL: Restoring previous memory policy: 4 00:06:08.819 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.819 EAL: request: mp_malloc_sync 00:06:08.819 EAL: No shared files mode enabled, IPC is disabled 00:06:08.819 EAL: Heap on socket 0 was expanded by 514MB 00:06:09.759 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.759 EAL: request: mp_malloc_sync 00:06:09.759 EAL: No shared files mode enabled, IPC is disabled 00:06:09.759 EAL: Heap on socket 0 was shrunk by 514MB 00:06:10.328 EAL: Trying to obtain current memory policy. 
00:06:10.328 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.586 EAL: Restoring previous memory policy: 4 00:06:10.586 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.586 EAL: request: mp_malloc_sync 00:06:10.587 EAL: No shared files mode enabled, IPC is disabled 00:06:10.587 EAL: Heap on socket 0 was expanded by 1026MB 00:06:12.491 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.491 EAL: request: mp_malloc_sync 00:06:12.491 EAL: No shared files mode enabled, IPC is disabled 00:06:12.491 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:13.864 passed 00:06:13.864 00:06:13.864 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.864 suites 1 1 n/a 0 0 00:06:13.864 tests 2 2 2 0 0 00:06:13.864 asserts 5334 5334 5334 0 n/a 00:06:13.864 00:06:13.864 Elapsed time = 7.476 seconds 00:06:13.864 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.864 EAL: request: mp_malloc_sync 00:06:13.864 EAL: No shared files mode enabled, IPC is disabled 00:06:13.864 EAL: Heap on socket 0 was shrunk by 2MB 00:06:13.864 EAL: No shared files mode enabled, IPC is disabled 00:06:13.864 EAL: No shared files mode enabled, IPC is disabled 00:06:13.864 EAL: No shared files mode enabled, IPC is disabled 00:06:13.864 00:06:13.864 real 0m7.800s 00:06:13.864 user 0m6.513s 00:06:13.864 sys 0m1.114s 00:06:13.864 18:13:25 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.864 ************************************ 00:06:13.864 END TEST env_vtophys 00:06:13.864 ************************************ 00:06:13.864 18:13:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:13.864 18:13:25 env -- common/autotest_common.sh@1142 -- # return 0 00:06:13.864 18:13:25 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:13.864 18:13:25 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.864 18:13:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.864 18:13:25 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.864 ************************************ 00:06:13.864 START TEST env_pci 00:06:13.864 ************************************ 00:06:13.864 18:13:25 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:13.864 00:06:13.864 00:06:13.864 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.864 http://cunit.sourceforge.net/ 00:06:13.864 00:06:13.864 00:06:13.864 Suite: pci 00:06:13.864 Test: pci_hook ...[2024-07-22 18:13:25.865897] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61944 has claimed it 00:06:14.122 passed 00:06:14.122 00:06:14.122 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.122 suites 1 1 n/a 0 0 00:06:14.122 tests 1 1 1 0 0 00:06:14.122 asserts 25 25 25 0 n/a 00:06:14.122 00:06:14.122 Elapsed time = 0.009 seconds 00:06:14.122 EAL: Cannot find device (10000:00:01.0) 00:06:14.122 EAL: Failed to attach device on primary process 00:06:14.122 00:06:14.122 real 0m0.086s 00:06:14.122 user 0m0.035s 00:06:14.122 sys 0m0.050s 00:06:14.122 18:13:25 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.122 ************************************ 00:06:14.122 END TEST env_pci 00:06:14.122 ************************************ 00:06:14.122 18:13:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:14.122 18:13:25 env -- common/autotest_common.sh@1142 -- # 
return 0 00:06:14.122 18:13:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:14.122 18:13:25 env -- env/env.sh@15 -- # uname 00:06:14.122 18:13:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:14.122 18:13:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:14.122 18:13:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:14.122 18:13:25 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:14.122 18:13:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.122 18:13:25 env -- common/autotest_common.sh@10 -- # set +x 00:06:14.122 ************************************ 00:06:14.122 START TEST env_dpdk_post_init 00:06:14.122 ************************************ 00:06:14.122 18:13:25 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:14.122 EAL: Detected CPU lcores: 10 00:06:14.122 EAL: Detected NUMA nodes: 1 00:06:14.122 EAL: Detected shared linkage of DPDK 00:06:14.122 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:14.122 EAL: Selected IOVA mode 'PA' 00:06:14.381 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:14.381 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:14.381 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:14.381 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:14.381 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:14.381 Starting DPDK initialization... 00:06:14.381 Starting SPDK post initialization... 00:06:14.381 SPDK NVMe probe 00:06:14.381 Attaching to 0000:00:10.0 00:06:14.381 Attaching to 0000:00:11.0 00:06:14.381 Attaching to 0000:00:12.0 00:06:14.381 Attaching to 0000:00:13.0 00:06:14.381 Attached to 0000:00:10.0 00:06:14.381 Attached to 0000:00:11.0 00:06:14.381 Attached to 0000:00:13.0 00:06:14.381 Attached to 0000:00:12.0 00:06:14.381 Cleaning up... 
00:06:14.381 ************************************ 00:06:14.381 END TEST env_dpdk_post_init 00:06:14.381 ************************************ 00:06:14.381 00:06:14.381 real 0m0.326s 00:06:14.381 user 0m0.117s 00:06:14.381 sys 0m0.109s 00:06:14.381 18:13:26 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.381 18:13:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:14.381 18:13:26 env -- common/autotest_common.sh@1142 -- # return 0 00:06:14.381 18:13:26 env -- env/env.sh@26 -- # uname 00:06:14.381 18:13:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:14.381 18:13:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:14.381 18:13:26 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.381 18:13:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.381 18:13:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:14.381 ************************************ 00:06:14.381 START TEST env_mem_callbacks 00:06:14.381 ************************************ 00:06:14.381 18:13:26 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:14.639 EAL: Detected CPU lcores: 10 00:06:14.639 EAL: Detected NUMA nodes: 1 00:06:14.639 EAL: Detected shared linkage of DPDK 00:06:14.639 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:14.639 EAL: Selected IOVA mode 'PA' 00:06:14.639 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:14.639 00:06:14.639 00:06:14.639 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.639 http://cunit.sourceforge.net/ 00:06:14.640 00:06:14.640 00:06:14.640 Suite: memory 00:06:14.640 Test: test ... 
00:06:14.640 register 0x200000200000 2097152 00:06:14.640 malloc 3145728 00:06:14.640 register 0x200000400000 4194304 00:06:14.640 buf 0x2000004fffc0 len 3145728 PASSED 00:06:14.640 malloc 64 00:06:14.640 buf 0x2000004ffec0 len 64 PASSED 00:06:14.640 malloc 4194304 00:06:14.640 register 0x200000800000 6291456 00:06:14.640 buf 0x2000009fffc0 len 4194304 PASSED 00:06:14.640 free 0x2000004fffc0 3145728 00:06:14.640 free 0x2000004ffec0 64 00:06:14.640 unregister 0x200000400000 4194304 PASSED 00:06:14.640 free 0x2000009fffc0 4194304 00:06:14.640 unregister 0x200000800000 6291456 PASSED 00:06:14.640 malloc 8388608 00:06:14.640 register 0x200000400000 10485760 00:06:14.640 buf 0x2000005fffc0 len 8388608 PASSED 00:06:14.640 free 0x2000005fffc0 8388608 00:06:14.640 unregister 0x200000400000 10485760 PASSED 00:06:14.640 passed 00:06:14.640 00:06:14.640 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.640 suites 1 1 n/a 0 0 00:06:14.640 tests 1 1 1 0 0 00:06:14.640 asserts 15 15 15 0 n/a 00:06:14.640 00:06:14.640 Elapsed time = 0.060 seconds 00:06:14.640 ************************************ 00:06:14.640 END TEST env_mem_callbacks 00:06:14.640 ************************************ 00:06:14.640 00:06:14.640 real 0m0.256s 00:06:14.640 user 0m0.084s 00:06:14.640 sys 0m0.069s 00:06:14.640 18:13:26 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.640 18:13:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:14.640 18:13:26 env -- common/autotest_common.sh@1142 -- # return 0 00:06:14.640 ************************************ 00:06:14.640 END TEST env 00:06:14.640 ************************************ 00:06:14.640 00:06:14.640 real 0m9.212s 00:06:14.640 user 0m7.250s 00:06:14.640 sys 0m1.566s 00:06:14.640 18:13:26 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.640 18:13:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:14.897 18:13:26 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.897 18:13:26 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:14.897 18:13:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.897 18:13:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.897 18:13:26 -- common/autotest_common.sh@10 -- # set +x 00:06:14.897 ************************************ 00:06:14.897 START TEST rpc 00:06:14.897 ************************************ 00:06:14.897 18:13:26 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:14.897 * Looking for test storage... 00:06:14.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:14.897 18:13:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:14.897 18:13:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62062 00:06:14.897 18:13:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.897 18:13:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62062 00:06:14.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.897 18:13:26 rpc -- common/autotest_common.sh@829 -- # '[' -z 62062 ']' 00:06:14.897 18:13:26 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.897 18:13:26 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.897 18:13:26 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:14.897 18:13:26 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.897 18:13:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.897 [2024-07-22 18:13:26.898021] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:14.897 [2024-07-22 18:13:26.898181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62062 ] 00:06:15.155 [2024-07-22 18:13:27.065441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.413 [2024-07-22 18:13:27.322189] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:15.413 [2024-07-22 18:13:27.322265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62062' to capture a snapshot of events at runtime. 00:06:15.413 [2024-07-22 18:13:27.322297] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.413 [2024-07-22 18:13:27.322319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.413 [2024-07-22 18:13:27.322335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62062 for offline analysis/debug. 00:06:15.413 [2024-07-22 18:13:27.322382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.346 18:13:28 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.347 18:13:28 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:16.347 18:13:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:16.347 18:13:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:16.347 18:13:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:16.347 18:13:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:16.347 18:13:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.347 18:13:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.347 18:13:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.347 ************************************ 00:06:16.347 START TEST rpc_integrity 00:06:16.347 ************************************ 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:16.347 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.347 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:16.347 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:16.347 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:16.347 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.347 18:13:28 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.347 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:16.347 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.347 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:16.347 { 00:06:16.347 "name": "Malloc0", 00:06:16.347 "aliases": [ 00:06:16.347 "6e1e4288-5308-44b8-ab70-55a268d756e6" 00:06:16.347 ], 00:06:16.347 "product_name": "Malloc disk", 00:06:16.347 "block_size": 512, 00:06:16.347 "num_blocks": 16384, 00:06:16.347 "uuid": "6e1e4288-5308-44b8-ab70-55a268d756e6", 00:06:16.347 "assigned_rate_limits": { 00:06:16.347 "rw_ios_per_sec": 0, 00:06:16.347 "rw_mbytes_per_sec": 0, 00:06:16.347 "r_mbytes_per_sec": 0, 00:06:16.347 "w_mbytes_per_sec": 0 00:06:16.347 }, 00:06:16.347 "claimed": false, 00:06:16.347 "zoned": false, 00:06:16.347 "supported_io_types": { 00:06:16.347 "read": true, 00:06:16.347 "write": true, 00:06:16.347 "unmap": true, 00:06:16.347 "flush": true, 00:06:16.347 "reset": true, 00:06:16.347 "nvme_admin": false, 00:06:16.347 "nvme_io": false, 00:06:16.347 "nvme_io_md": false, 00:06:16.347 "write_zeroes": true, 00:06:16.347 "zcopy": true, 00:06:16.347 "get_zone_info": false, 00:06:16.347 "zone_management": false, 00:06:16.347 "zone_append": false, 00:06:16.347 "compare": false, 00:06:16.347 "compare_and_write": false, 00:06:16.347 "abort": true, 00:06:16.347 "seek_hole": false, 00:06:16.347 "seek_data": false, 00:06:16.347 "copy": true, 00:06:16.347 "nvme_iov_md": false 00:06:16.347 }, 00:06:16.347 "memory_domains": [ 00:06:16.347 { 00:06:16.347 "dma_device_id": "system", 00:06:16.347 "dma_device_type": 1 00:06:16.347 }, 00:06:16.347 { 00:06:16.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.347 "dma_device_type": 2 00:06:16.347 } 00:06:16.347 ], 00:06:16.347 "driver_specific": {} 00:06:16.347 } 00:06:16.347 ]' 00:06:16.347 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:16.347 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:16.347 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.347 [2024-07-22 18:13:28.353353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:16.347 [2024-07-22 18:13:28.353456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:16.347 [2024-07-22 18:13:28.353503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:16.347 [2024-07-22 18:13:28.353528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:16.347 [2024-07-22 18:13:28.357186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:16.347 [2024-07-22 18:13:28.357269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:16.347 Passthru0 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.347 
18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.347 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.605 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.605 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:16.605 { 00:06:16.605 "name": "Malloc0", 00:06:16.606 "aliases": [ 00:06:16.606 "6e1e4288-5308-44b8-ab70-55a268d756e6" 00:06:16.606 ], 00:06:16.606 "product_name": "Malloc disk", 00:06:16.606 "block_size": 512, 00:06:16.606 "num_blocks": 16384, 00:06:16.606 "uuid": "6e1e4288-5308-44b8-ab70-55a268d756e6", 00:06:16.606 "assigned_rate_limits": { 00:06:16.606 "rw_ios_per_sec": 0, 00:06:16.606 "rw_mbytes_per_sec": 0, 00:06:16.606 "r_mbytes_per_sec": 0, 00:06:16.606 "w_mbytes_per_sec": 0 00:06:16.606 }, 00:06:16.606 "claimed": true, 00:06:16.606 "claim_type": "exclusive_write", 00:06:16.606 "zoned": false, 00:06:16.606 "supported_io_types": { 00:06:16.606 "read": true, 00:06:16.606 "write": true, 00:06:16.606 "unmap": true, 00:06:16.606 "flush": true, 00:06:16.606 "reset": true, 00:06:16.606 "nvme_admin": false, 00:06:16.606 "nvme_io": false, 00:06:16.606 "nvme_io_md": false, 00:06:16.606 "write_zeroes": true, 00:06:16.606 "zcopy": true, 00:06:16.606 "get_zone_info": false, 00:06:16.606 "zone_management": false, 00:06:16.606 "zone_append": false, 00:06:16.606 "compare": false, 00:06:16.606 "compare_and_write": false, 00:06:16.606 "abort": true, 00:06:16.606 "seek_hole": false, 00:06:16.606 "seek_data": false, 00:06:16.606 "copy": true, 00:06:16.606 "nvme_iov_md": false 00:06:16.606 }, 00:06:16.606 "memory_domains": [ 00:06:16.606 { 00:06:16.606 "dma_device_id": "system", 00:06:16.606 "dma_device_type": 1 00:06:16.606 }, 00:06:16.606 { 00:06:16.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.606 "dma_device_type": 2 00:06:16.606 } 00:06:16.606 ], 00:06:16.606 "driver_specific": {} 00:06:16.606 }, 00:06:16.606 { 00:06:16.606 "name": "Passthru0", 00:06:16.606 "aliases": [ 00:06:16.606 "0bd0ba15-7b7d-5a53-be20-5f0630551cf1" 00:06:16.606 ], 00:06:16.606 "product_name": "passthru", 00:06:16.606 "block_size": 512, 00:06:16.606 "num_blocks": 16384, 00:06:16.606 "uuid": "0bd0ba15-7b7d-5a53-be20-5f0630551cf1", 00:06:16.606 "assigned_rate_limits": { 00:06:16.606 "rw_ios_per_sec": 0, 00:06:16.606 "rw_mbytes_per_sec": 0, 00:06:16.606 "r_mbytes_per_sec": 0, 00:06:16.606 "w_mbytes_per_sec": 0 00:06:16.606 }, 00:06:16.606 "claimed": false, 00:06:16.606 "zoned": false, 00:06:16.606 "supported_io_types": { 00:06:16.606 "read": true, 00:06:16.606 "write": true, 00:06:16.606 "unmap": true, 00:06:16.606 "flush": true, 00:06:16.606 "reset": true, 00:06:16.606 "nvme_admin": false, 00:06:16.606 "nvme_io": false, 00:06:16.606 "nvme_io_md": false, 00:06:16.606 "write_zeroes": true, 00:06:16.606 "zcopy": true, 00:06:16.606 "get_zone_info": false, 00:06:16.606 "zone_management": false, 00:06:16.606 "zone_append": false, 00:06:16.606 "compare": false, 00:06:16.606 "compare_and_write": false, 00:06:16.606 "abort": true, 00:06:16.606 "seek_hole": false, 00:06:16.606 "seek_data": false, 00:06:16.606 "copy": true, 00:06:16.606 "nvme_iov_md": false 00:06:16.606 }, 00:06:16.606 "memory_domains": [ 00:06:16.606 { 00:06:16.606 "dma_device_id": "system", 00:06:16.606 "dma_device_type": 1 00:06:16.606 }, 00:06:16.606 { 00:06:16.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.606 "dma_device_type": 2 
00:06:16.606 } 00:06:16.606 ], 00:06:16.606 "driver_specific": { 00:06:16.606 "passthru": { 00:06:16.606 "name": "Passthru0", 00:06:16.606 "base_bdev_name": "Malloc0" 00:06:16.606 } 00:06:16.606 } 00:06:16.606 } 00:06:16.606 ]' 00:06:16.606 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:16.606 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:16.606 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:16.606 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.606 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.606 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.606 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:16.606 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.606 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.606 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.606 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:16.606 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.606 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.606 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.606 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:16.606 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:16.606 ************************************ 00:06:16.606 END TEST rpc_integrity 00:06:16.606 ************************************ 00:06:16.606 18:13:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:16.606 00:06:16.606 real 0m0.377s 00:06:16.606 user 0m0.225s 00:06:16.606 sys 0m0.048s 00:06:16.606 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.606 18:13:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.606 18:13:28 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:16.606 18:13:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:16.606 18:13:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.606 18:13:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.606 18:13:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.606 ************************************ 00:06:16.606 START TEST rpc_plugins 00:06:16.606 ************************************ 00:06:16.606 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:16.606 18:13:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:16.606 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.606 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:16.864 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.864 18:13:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:16.865 18:13:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:16.865 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.865 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:16.865 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.865 18:13:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:06:16.865 { 00:06:16.865 "name": "Malloc1", 00:06:16.865 "aliases": [ 00:06:16.865 "7e504b18-3fa0-4e7d-94f9-a8e95dc7ab94" 00:06:16.865 ], 00:06:16.865 "product_name": "Malloc disk", 00:06:16.865 "block_size": 4096, 00:06:16.865 "num_blocks": 256, 00:06:16.865 "uuid": "7e504b18-3fa0-4e7d-94f9-a8e95dc7ab94", 00:06:16.865 "assigned_rate_limits": { 00:06:16.865 "rw_ios_per_sec": 0, 00:06:16.865 "rw_mbytes_per_sec": 0, 00:06:16.865 "r_mbytes_per_sec": 0, 00:06:16.865 "w_mbytes_per_sec": 0 00:06:16.865 }, 00:06:16.865 "claimed": false, 00:06:16.865 "zoned": false, 00:06:16.865 "supported_io_types": { 00:06:16.865 "read": true, 00:06:16.865 "write": true, 00:06:16.865 "unmap": true, 00:06:16.865 "flush": true, 00:06:16.865 "reset": true, 00:06:16.865 "nvme_admin": false, 00:06:16.865 "nvme_io": false, 00:06:16.865 "nvme_io_md": false, 00:06:16.865 "write_zeroes": true, 00:06:16.865 "zcopy": true, 00:06:16.865 "get_zone_info": false, 00:06:16.865 "zone_management": false, 00:06:16.865 "zone_append": false, 00:06:16.865 "compare": false, 00:06:16.865 "compare_and_write": false, 00:06:16.865 "abort": true, 00:06:16.865 "seek_hole": false, 00:06:16.865 "seek_data": false, 00:06:16.865 "copy": true, 00:06:16.865 "nvme_iov_md": false 00:06:16.865 }, 00:06:16.865 "memory_domains": [ 00:06:16.865 { 00:06:16.865 "dma_device_id": "system", 00:06:16.865 "dma_device_type": 1 00:06:16.865 }, 00:06:16.865 { 00:06:16.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.865 "dma_device_type": 2 00:06:16.865 } 00:06:16.865 ], 00:06:16.865 "driver_specific": {} 00:06:16.865 } 00:06:16.865 ]' 00:06:16.865 18:13:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:16.865 18:13:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:16.865 18:13:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:16.865 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.865 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:16.865 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.865 18:13:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:16.865 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.865 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:16.865 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.865 18:13:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:16.865 18:13:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:16.865 ************************************ 00:06:16.865 END TEST rpc_plugins 00:06:16.865 ************************************ 00:06:16.865 18:13:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:16.865 00:06:16.865 real 0m0.164s 00:06:16.865 user 0m0.108s 00:06:16.865 sys 0m0.017s 00:06:16.865 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.865 18:13:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:16.865 18:13:28 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:16.865 18:13:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:16.865 18:13:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.865 18:13:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.865 18:13:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.865 ************************************ 00:06:16.865 
START TEST rpc_trace_cmd_test 00:06:16.865 ************************************ 00:06:16.865 18:13:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:16.865 18:13:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:16.865 18:13:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:16.865 18:13:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.865 18:13:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.865 18:13:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.865 18:13:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:16.865 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62062", 00:06:16.865 "tpoint_group_mask": "0x8", 00:06:16.865 "iscsi_conn": { 00:06:16.865 "mask": "0x2", 00:06:16.865 "tpoint_mask": "0x0" 00:06:16.865 }, 00:06:16.865 "scsi": { 00:06:16.865 "mask": "0x4", 00:06:16.865 "tpoint_mask": "0x0" 00:06:16.865 }, 00:06:16.865 "bdev": { 00:06:16.865 "mask": "0x8", 00:06:16.865 "tpoint_mask": "0xffffffffffffffff" 00:06:16.865 }, 00:06:16.865 "nvmf_rdma": { 00:06:16.865 "mask": "0x10", 00:06:16.865 "tpoint_mask": "0x0" 00:06:16.865 }, 00:06:16.865 "nvmf_tcp": { 00:06:16.865 "mask": "0x20", 00:06:16.865 "tpoint_mask": "0x0" 00:06:16.865 }, 00:06:16.865 "ftl": { 00:06:16.865 "mask": "0x40", 00:06:16.865 "tpoint_mask": "0x0" 00:06:16.865 }, 00:06:16.865 "blobfs": { 00:06:16.865 "mask": "0x80", 00:06:16.865 "tpoint_mask": "0x0" 00:06:16.865 }, 00:06:16.865 "dsa": { 00:06:16.865 "mask": "0x200", 00:06:16.865 "tpoint_mask": "0x0" 00:06:16.865 }, 00:06:16.865 "thread": { 00:06:16.865 "mask": "0x400", 00:06:16.865 "tpoint_mask": "0x0" 00:06:16.865 }, 00:06:16.865 "nvme_pcie": { 00:06:16.865 "mask": "0x800", 00:06:16.865 "tpoint_mask": "0x0" 00:06:16.865 }, 00:06:16.865 "iaa": { 00:06:16.865 "mask": "0x1000", 00:06:16.865 "tpoint_mask": "0x0" 00:06:16.865 }, 00:06:16.865 "nvme_tcp": { 00:06:16.865 "mask": "0x2000", 00:06:16.865 "tpoint_mask": "0x0" 00:06:16.866 }, 00:06:16.866 "bdev_nvme": { 00:06:16.866 "mask": "0x4000", 00:06:16.866 "tpoint_mask": "0x0" 00:06:16.866 }, 00:06:16.866 "sock": { 00:06:16.866 "mask": "0x8000", 00:06:16.866 "tpoint_mask": "0x0" 00:06:16.866 } 00:06:16.866 }' 00:06:16.866 18:13:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:17.124 18:13:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:17.124 18:13:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:17.124 18:13:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:17.124 18:13:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:17.124 18:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:17.124 18:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:17.124 18:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:17.124 18:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:17.124 ************************************ 00:06:17.124 END TEST rpc_trace_cmd_test 00:06:17.124 ************************************ 00:06:17.124 18:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:17.124 00:06:17.124 real 0m0.293s 00:06:17.124 user 0m0.239s 00:06:17.124 sys 0m0.028s 00:06:17.124 18:13:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.124 
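The trace assertions above boil down to a few jq probes over the trace_get_info payload. A condensed sketch of the same checks, assuming rpc_cmd forwards to scripts/rpc.py on the running target as the suite's helpers do:

  info=$(rpc_cmd trace_get_info)                        # the JSON map dumped above
  [ "$(jq length <<< "$info")" -gt 2 ]                  # more keys than shm path + group mask alone
  [ "$(jq 'has("tpoint_group_mask")' <<< "$info")" = true ]
  [ "$(jq 'has("tpoint_shm_path")' <<< "$info")" = true ]
  [ "$(jq -r .bdev.tpoint_mask <<< "$info")" != 0x0 ]   # bdev group (mask 0x8) has tracepoints lit

The non-zero bdev tpoint_mask is the real assertion: it proves the trace group enabled at target launch actually armed its individual tracepoints.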
18:13:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.382 18:13:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:17.382 18:13:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:17.382 18:13:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:17.382 18:13:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:17.382 18:13:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.382 18:13:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.382 18:13:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.382 ************************************ 00:06:17.382 START TEST rpc_daemon_integrity 00:06:17.382 ************************************ 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.382 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:17.383 { 00:06:17.383 "name": "Malloc2", 00:06:17.383 "aliases": [ 00:06:17.383 "246ca6cc-59f6-41f7-9797-e0e5df9569c8" 00:06:17.383 ], 00:06:17.383 "product_name": "Malloc disk", 00:06:17.383 "block_size": 512, 00:06:17.383 "num_blocks": 16384, 00:06:17.383 "uuid": "246ca6cc-59f6-41f7-9797-e0e5df9569c8", 00:06:17.383 "assigned_rate_limits": { 00:06:17.383 "rw_ios_per_sec": 0, 00:06:17.383 "rw_mbytes_per_sec": 0, 00:06:17.383 "r_mbytes_per_sec": 0, 00:06:17.383 "w_mbytes_per_sec": 0 00:06:17.383 }, 00:06:17.383 "claimed": false, 00:06:17.383 "zoned": false, 00:06:17.383 "supported_io_types": { 00:06:17.383 "read": true, 00:06:17.383 "write": true, 00:06:17.383 "unmap": true, 00:06:17.383 "flush": true, 00:06:17.383 "reset": true, 00:06:17.383 "nvme_admin": false, 00:06:17.383 "nvme_io": false, 00:06:17.383 "nvme_io_md": false, 00:06:17.383 "write_zeroes": true, 00:06:17.383 "zcopy": true, 00:06:17.383 "get_zone_info": false, 00:06:17.383 "zone_management": false, 00:06:17.383 "zone_append": false, 00:06:17.383 "compare": false, 00:06:17.383 "compare_and_write": false, 00:06:17.383 "abort": true, 00:06:17.383 "seek_hole": false, 
00:06:17.383 "seek_data": false, 00:06:17.383 "copy": true, 00:06:17.383 "nvme_iov_md": false 00:06:17.383 }, 00:06:17.383 "memory_domains": [ 00:06:17.383 { 00:06:17.383 "dma_device_id": "system", 00:06:17.383 "dma_device_type": 1 00:06:17.383 }, 00:06:17.383 { 00:06:17.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.383 "dma_device_type": 2 00:06:17.383 } 00:06:17.383 ], 00:06:17.383 "driver_specific": {} 00:06:17.383 } 00:06:17.383 ]' 00:06:17.383 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:17.383 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:17.383 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:17.383 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.383 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.383 [2024-07-22 18:13:29.347838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:17.383 [2024-07-22 18:13:29.347929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:17.383 [2024-07-22 18:13:29.347967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:17.383 [2024-07-22 18:13:29.347983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:17.383 [2024-07-22 18:13:29.350876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:17.383 [2024-07-22 18:13:29.350924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:17.383 Passthru0 00:06:17.383 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.383 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:17.383 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.383 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.383 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.383 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:17.383 { 00:06:17.383 "name": "Malloc2", 00:06:17.383 "aliases": [ 00:06:17.383 "246ca6cc-59f6-41f7-9797-e0e5df9569c8" 00:06:17.383 ], 00:06:17.383 "product_name": "Malloc disk", 00:06:17.383 "block_size": 512, 00:06:17.383 "num_blocks": 16384, 00:06:17.383 "uuid": "246ca6cc-59f6-41f7-9797-e0e5df9569c8", 00:06:17.383 "assigned_rate_limits": { 00:06:17.383 "rw_ios_per_sec": 0, 00:06:17.383 "rw_mbytes_per_sec": 0, 00:06:17.383 "r_mbytes_per_sec": 0, 00:06:17.383 "w_mbytes_per_sec": 0 00:06:17.383 }, 00:06:17.383 "claimed": true, 00:06:17.383 "claim_type": "exclusive_write", 00:06:17.383 "zoned": false, 00:06:17.383 "supported_io_types": { 00:06:17.383 "read": true, 00:06:17.383 "write": true, 00:06:17.383 "unmap": true, 00:06:17.383 "flush": true, 00:06:17.383 "reset": true, 00:06:17.383 "nvme_admin": false, 00:06:17.383 "nvme_io": false, 00:06:17.383 "nvme_io_md": false, 00:06:17.383 "write_zeroes": true, 00:06:17.383 "zcopy": true, 00:06:17.383 "get_zone_info": false, 00:06:17.383 "zone_management": false, 00:06:17.383 "zone_append": false, 00:06:17.383 "compare": false, 00:06:17.383 "compare_and_write": false, 00:06:17.383 "abort": true, 00:06:17.383 "seek_hole": false, 00:06:17.383 "seek_data": false, 00:06:17.383 "copy": true, 00:06:17.383 "nvme_iov_md": false 00:06:17.383 }, 00:06:17.383 
"memory_domains": [ 00:06:17.383 { 00:06:17.383 "dma_device_id": "system", 00:06:17.383 "dma_device_type": 1 00:06:17.383 }, 00:06:17.383 { 00:06:17.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.383 "dma_device_type": 2 00:06:17.383 } 00:06:17.383 ], 00:06:17.383 "driver_specific": {} 00:06:17.383 }, 00:06:17.383 { 00:06:17.383 "name": "Passthru0", 00:06:17.383 "aliases": [ 00:06:17.383 "811b6121-aa3e-56d0-ab25-fce4e728142d" 00:06:17.383 ], 00:06:17.383 "product_name": "passthru", 00:06:17.383 "block_size": 512, 00:06:17.383 "num_blocks": 16384, 00:06:17.383 "uuid": "811b6121-aa3e-56d0-ab25-fce4e728142d", 00:06:17.383 "assigned_rate_limits": { 00:06:17.383 "rw_ios_per_sec": 0, 00:06:17.383 "rw_mbytes_per_sec": 0, 00:06:17.383 "r_mbytes_per_sec": 0, 00:06:17.383 "w_mbytes_per_sec": 0 00:06:17.383 }, 00:06:17.383 "claimed": false, 00:06:17.383 "zoned": false, 00:06:17.383 "supported_io_types": { 00:06:17.383 "read": true, 00:06:17.383 "write": true, 00:06:17.383 "unmap": true, 00:06:17.383 "flush": true, 00:06:17.383 "reset": true, 00:06:17.383 "nvme_admin": false, 00:06:17.383 "nvme_io": false, 00:06:17.383 "nvme_io_md": false, 00:06:17.383 "write_zeroes": true, 00:06:17.383 "zcopy": true, 00:06:17.383 "get_zone_info": false, 00:06:17.383 "zone_management": false, 00:06:17.383 "zone_append": false, 00:06:17.383 "compare": false, 00:06:17.383 "compare_and_write": false, 00:06:17.383 "abort": true, 00:06:17.383 "seek_hole": false, 00:06:17.383 "seek_data": false, 00:06:17.383 "copy": true, 00:06:17.383 "nvme_iov_md": false 00:06:17.383 }, 00:06:17.383 "memory_domains": [ 00:06:17.383 { 00:06:17.383 "dma_device_id": "system", 00:06:17.383 "dma_device_type": 1 00:06:17.383 }, 00:06:17.383 { 00:06:17.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.383 "dma_device_type": 2 00:06:17.383 } 00:06:17.383 ], 00:06:17.383 "driver_specific": { 00:06:17.383 "passthru": { 00:06:17.383 "name": "Passthru0", 00:06:17.383 "base_bdev_name": "Malloc2" 00:06:17.383 } 00:06:17.383 } 00:06:17.383 } 00:06:17.383 ]' 00:06:17.383 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:17.642 
************************************ 00:06:17.642 END TEST rpc_daemon_integrity 00:06:17.642 ************************************ 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:17.642 00:06:17.642 real 0m0.376s 00:06:17.642 user 0m0.229s 00:06:17.642 sys 0m0.047s 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.642 18:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.642 18:13:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:17.642 18:13:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:17.642 18:13:29 rpc -- rpc/rpc.sh@84 -- # killprocess 62062 00:06:17.642 18:13:29 rpc -- common/autotest_common.sh@948 -- # '[' -z 62062 ']' 00:06:17.642 18:13:29 rpc -- common/autotest_common.sh@952 -- # kill -0 62062 00:06:17.642 18:13:29 rpc -- common/autotest_common.sh@953 -- # uname 00:06:17.642 18:13:29 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.642 18:13:29 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62062 00:06:17.642 killing process with pid 62062 00:06:17.642 18:13:29 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.642 18:13:29 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.642 18:13:29 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62062' 00:06:17.642 18:13:29 rpc -- common/autotest_common.sh@967 -- # kill 62062 00:06:17.642 18:13:29 rpc -- common/autotest_common.sh@972 -- # wait 62062 00:06:20.174 00:06:20.174 real 0m5.160s 00:06:20.174 user 0m5.897s 00:06:20.174 sys 0m0.858s 00:06:20.174 18:13:31 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.174 ************************************ 00:06:20.174 END TEST rpc 00:06:20.174 ************************************ 00:06:20.174 18:13:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.174 18:13:31 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.174 18:13:31 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:20.174 18:13:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.174 18:13:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.174 18:13:31 -- common/autotest_common.sh@10 -- # set +x 00:06:20.174 ************************************ 00:06:20.174 START TEST skip_rpc 00:06:20.174 ************************************ 00:06:20.174 18:13:31 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:20.174 * Looking for test storage... 
00:06:20.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:20.174 18:13:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:20.174 18:13:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:20.174 18:13:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:20.174 18:13:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.174 18:13:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.174 18:13:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.174 ************************************ 00:06:20.174 START TEST skip_rpc 00:06:20.174 ************************************ 00:06:20.174 18:13:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:20.174 18:13:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62283 00:06:20.174 18:13:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:20.174 18:13:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.174 18:13:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:20.174 [2024-07-22 18:13:32.140639] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:20.174 [2024-07-22 18:13:32.140847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62283 ] 00:06:20.432 [2024-07-22 18:13:32.319172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.691 [2024-07-22 18:13:32.615912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62283 
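Everything in skip_rpc rests on one negative assertion: a target launched with --no-rpc-server must reject every RPC. A minimal reproduction using the same binary and flags as this run:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                              # no socket ever appears, so the test sleeps instead of waitforlisten
  if rpc_cmd spdk_get_version; then    # must fail: there is no RPC listener
      echo 'RPC unexpectedly succeeded' >&2
      exit 1
  fi
  killprocess "$spdk_pid"

The NOT wrapper in the trace encodes the same expectation, treating the ordinary non-zero exit from rpc_cmd as the expected es=1 outcome.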
00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 62283 ']' 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 62283 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62283 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.957 killing process with pid 62283 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62283' 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 62283 00:06:25.957 18:13:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 62283 00:06:27.860 ************************************ 00:06:27.860 END TEST skip_rpc 00:06:27.860 ************************************ 00:06:27.860 00:06:27.860 real 0m7.348s 00:06:27.860 user 0m6.763s 00:06:27.860 sys 0m0.471s 00:06:27.860 18:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.860 18:13:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.860 18:13:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:27.860 18:13:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:27.860 18:13:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.860 18:13:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.860 18:13:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.860 ************************************ 00:06:27.860 START TEST skip_rpc_with_json 00:06:27.860 ************************************ 00:06:27.860 18:13:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:27.860 18:13:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:27.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.860 18:13:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62388 00:06:27.860 18:13:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.860 18:13:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62388 00:06:27.860 18:13:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.860 18:13:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 62388 ']' 00:06:27.860 18:13:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.860 18:13:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.860 18:13:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:27.860 18:13:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.860 18:13:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.860 [2024-07-22 18:13:39.527716] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:27.860 [2024-07-22 18:13:39.527903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62388 ] 00:06:27.860 [2024-07-22 18:13:39.703559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.119 [2024-07-22 18:13:39.963255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.055 [2024-07-22 18:13:40.755159] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:29.055 request: 00:06:29.055 { 00:06:29.055 "trtype": "tcp", 00:06:29.055 "method": "nvmf_get_transports", 00:06:29.055 "req_id": 1 00:06:29.055 } 00:06:29.055 Got JSON-RPC error response 00:06:29.055 response: 00:06:29.055 { 00:06:29.055 "code": -19, 00:06:29.055 "message": "No such device" 00:06:29.055 } 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.055 [2024-07-22 18:13:40.767298] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.055 18:13:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:29.055 { 00:06:29.055 "subsystems": [ 00:06:29.055 { 00:06:29.055 "subsystem": "keyring", 00:06:29.055 "config": [] 00:06:29.055 }, 00:06:29.055 { 00:06:29.055 "subsystem": "iobuf", 00:06:29.055 "config": [ 00:06:29.055 { 00:06:29.055 "method": "iobuf_set_options", 00:06:29.055 "params": { 00:06:29.055 "small_pool_count": 8192, 00:06:29.055 "large_pool_count": 1024, 00:06:29.055 "small_bufsize": 8192, 00:06:29.055 "large_bufsize": 135168 00:06:29.055 } 00:06:29.055 } 00:06:29.055 ] 00:06:29.055 }, 00:06:29.055 { 00:06:29.055 "subsystem": "sock", 00:06:29.055 "config": [ 00:06:29.055 { 00:06:29.055 "method": 
"sock_set_default_impl", 00:06:29.055 "params": { 00:06:29.055 "impl_name": "posix" 00:06:29.055 } 00:06:29.055 }, 00:06:29.055 { 00:06:29.055 "method": "sock_impl_set_options", 00:06:29.055 "params": { 00:06:29.055 "impl_name": "ssl", 00:06:29.055 "recv_buf_size": 4096, 00:06:29.055 "send_buf_size": 4096, 00:06:29.055 "enable_recv_pipe": true, 00:06:29.055 "enable_quickack": false, 00:06:29.055 "enable_placement_id": 0, 00:06:29.055 "enable_zerocopy_send_server": true, 00:06:29.055 "enable_zerocopy_send_client": false, 00:06:29.055 "zerocopy_threshold": 0, 00:06:29.055 "tls_version": 0, 00:06:29.055 "enable_ktls": false 00:06:29.055 } 00:06:29.055 }, 00:06:29.055 { 00:06:29.055 "method": "sock_impl_set_options", 00:06:29.055 "params": { 00:06:29.055 "impl_name": "posix", 00:06:29.055 "recv_buf_size": 2097152, 00:06:29.055 "send_buf_size": 2097152, 00:06:29.055 "enable_recv_pipe": true, 00:06:29.055 "enable_quickack": false, 00:06:29.055 "enable_placement_id": 0, 00:06:29.055 "enable_zerocopy_send_server": true, 00:06:29.055 "enable_zerocopy_send_client": false, 00:06:29.055 "zerocopy_threshold": 0, 00:06:29.055 "tls_version": 0, 00:06:29.055 "enable_ktls": false 00:06:29.055 } 00:06:29.055 } 00:06:29.055 ] 00:06:29.055 }, 00:06:29.055 { 00:06:29.055 "subsystem": "vmd", 00:06:29.055 "config": [] 00:06:29.055 }, 00:06:29.055 { 00:06:29.055 "subsystem": "accel", 00:06:29.055 "config": [ 00:06:29.055 { 00:06:29.055 "method": "accel_set_options", 00:06:29.055 "params": { 00:06:29.055 "small_cache_size": 128, 00:06:29.055 "large_cache_size": 16, 00:06:29.055 "task_count": 2048, 00:06:29.055 "sequence_count": 2048, 00:06:29.055 "buf_count": 2048 00:06:29.055 } 00:06:29.055 } 00:06:29.055 ] 00:06:29.055 }, 00:06:29.055 { 00:06:29.055 "subsystem": "bdev", 00:06:29.055 "config": [ 00:06:29.055 { 00:06:29.055 "method": "bdev_set_options", 00:06:29.055 "params": { 00:06:29.055 "bdev_io_pool_size": 65535, 00:06:29.055 "bdev_io_cache_size": 256, 00:06:29.055 "bdev_auto_examine": true, 00:06:29.055 "iobuf_small_cache_size": 128, 00:06:29.055 "iobuf_large_cache_size": 16 00:06:29.055 } 00:06:29.055 }, 00:06:29.055 { 00:06:29.055 "method": "bdev_raid_set_options", 00:06:29.055 "params": { 00:06:29.055 "process_window_size_kb": 1024, 00:06:29.055 "process_max_bandwidth_mb_sec": 0 00:06:29.055 } 00:06:29.055 }, 00:06:29.055 { 00:06:29.055 "method": "bdev_iscsi_set_options", 00:06:29.055 "params": { 00:06:29.055 "timeout_sec": 30 00:06:29.055 } 00:06:29.055 }, 00:06:29.055 { 00:06:29.055 "method": "bdev_nvme_set_options", 00:06:29.055 "params": { 00:06:29.055 "action_on_timeout": "none", 00:06:29.055 "timeout_us": 0, 00:06:29.055 "timeout_admin_us": 0, 00:06:29.055 "keep_alive_timeout_ms": 10000, 00:06:29.055 "arbitration_burst": 0, 00:06:29.055 "low_priority_weight": 0, 00:06:29.055 "medium_priority_weight": 0, 00:06:29.055 "high_priority_weight": 0, 00:06:29.055 "nvme_adminq_poll_period_us": 10000, 00:06:29.055 "nvme_ioq_poll_period_us": 0, 00:06:29.055 "io_queue_requests": 0, 00:06:29.055 "delay_cmd_submit": true, 00:06:29.055 "transport_retry_count": 4, 00:06:29.055 "bdev_retry_count": 3, 00:06:29.055 "transport_ack_timeout": 0, 00:06:29.055 "ctrlr_loss_timeout_sec": 0, 00:06:29.055 "reconnect_delay_sec": 0, 00:06:29.055 "fast_io_fail_timeout_sec": 0, 00:06:29.055 "disable_auto_failback": false, 00:06:29.055 "generate_uuids": false, 00:06:29.055 "transport_tos": 0, 00:06:29.055 "nvme_error_stat": false, 00:06:29.055 "rdma_srq_size": 0, 00:06:29.055 "io_path_stat": false, 00:06:29.055 
"allow_accel_sequence": false, 00:06:29.055 "rdma_max_cq_size": 0, 00:06:29.055 "rdma_cm_event_timeout_ms": 0, 00:06:29.055 "dhchap_digests": [ 00:06:29.055 "sha256", 00:06:29.055 "sha384", 00:06:29.055 "sha512" 00:06:29.055 ], 00:06:29.055 "dhchap_dhgroups": [ 00:06:29.055 "null", 00:06:29.055 "ffdhe2048", 00:06:29.055 "ffdhe3072", 00:06:29.055 "ffdhe4096", 00:06:29.055 "ffdhe6144", 00:06:29.055 "ffdhe8192" 00:06:29.055 ] 00:06:29.055 } 00:06:29.055 }, 00:06:29.055 { 00:06:29.055 "method": "bdev_nvme_set_hotplug", 00:06:29.055 "params": { 00:06:29.055 "period_us": 100000, 00:06:29.056 "enable": false 00:06:29.056 } 00:06:29.056 }, 00:06:29.056 { 00:06:29.056 "method": "bdev_wait_for_examine" 00:06:29.056 } 00:06:29.056 ] 00:06:29.056 }, 00:06:29.056 { 00:06:29.056 "subsystem": "scsi", 00:06:29.056 "config": null 00:06:29.056 }, 00:06:29.056 { 00:06:29.056 "subsystem": "scheduler", 00:06:29.056 "config": [ 00:06:29.056 { 00:06:29.056 "method": "framework_set_scheduler", 00:06:29.056 "params": { 00:06:29.056 "name": "static" 00:06:29.056 } 00:06:29.056 } 00:06:29.056 ] 00:06:29.056 }, 00:06:29.056 { 00:06:29.056 "subsystem": "vhost_scsi", 00:06:29.056 "config": [] 00:06:29.056 }, 00:06:29.056 { 00:06:29.056 "subsystem": "vhost_blk", 00:06:29.056 "config": [] 00:06:29.056 }, 00:06:29.056 { 00:06:29.056 "subsystem": "ublk", 00:06:29.056 "config": [] 00:06:29.056 }, 00:06:29.056 { 00:06:29.056 "subsystem": "nbd", 00:06:29.056 "config": [] 00:06:29.056 }, 00:06:29.056 { 00:06:29.056 "subsystem": "nvmf", 00:06:29.056 "config": [ 00:06:29.056 { 00:06:29.056 "method": "nvmf_set_config", 00:06:29.056 "params": { 00:06:29.056 "discovery_filter": "match_any", 00:06:29.056 "admin_cmd_passthru": { 00:06:29.056 "identify_ctrlr": false 00:06:29.056 } 00:06:29.056 } 00:06:29.056 }, 00:06:29.056 { 00:06:29.056 "method": "nvmf_set_max_subsystems", 00:06:29.056 "params": { 00:06:29.056 "max_subsystems": 1024 00:06:29.056 } 00:06:29.056 }, 00:06:29.056 { 00:06:29.056 "method": "nvmf_set_crdt", 00:06:29.056 "params": { 00:06:29.056 "crdt1": 0, 00:06:29.056 "crdt2": 0, 00:06:29.056 "crdt3": 0 00:06:29.056 } 00:06:29.056 }, 00:06:29.056 { 00:06:29.056 "method": "nvmf_create_transport", 00:06:29.056 "params": { 00:06:29.056 "trtype": "TCP", 00:06:29.056 "max_queue_depth": 128, 00:06:29.056 "max_io_qpairs_per_ctrlr": 127, 00:06:29.056 "in_capsule_data_size": 4096, 00:06:29.056 "max_io_size": 131072, 00:06:29.056 "io_unit_size": 131072, 00:06:29.056 "max_aq_depth": 128, 00:06:29.056 "num_shared_buffers": 511, 00:06:29.056 "buf_cache_size": 4294967295, 00:06:29.056 "dif_insert_or_strip": false, 00:06:29.056 "zcopy": false, 00:06:29.056 "c2h_success": true, 00:06:29.056 "sock_priority": 0, 00:06:29.056 "abort_timeout_sec": 1, 00:06:29.056 "ack_timeout": 0, 00:06:29.056 "data_wr_pool_size": 0 00:06:29.056 } 00:06:29.056 } 00:06:29.056 ] 00:06:29.056 }, 00:06:29.056 { 00:06:29.056 "subsystem": "iscsi", 00:06:29.056 "config": [ 00:06:29.056 { 00:06:29.056 "method": "iscsi_set_options", 00:06:29.056 "params": { 00:06:29.056 "node_base": "iqn.2016-06.io.spdk", 00:06:29.056 "max_sessions": 128, 00:06:29.056 "max_connections_per_session": 2, 00:06:29.056 "max_queue_depth": 64, 00:06:29.056 "default_time2wait": 2, 00:06:29.056 "default_time2retain": 20, 00:06:29.056 "first_burst_length": 8192, 00:06:29.056 "immediate_data": true, 00:06:29.056 "allow_duplicated_isid": false, 00:06:29.056 "error_recovery_level": 0, 00:06:29.056 "nop_timeout": 60, 00:06:29.056 "nop_in_interval": 30, 00:06:29.056 "disable_chap": false, 
00:06:29.056 "require_chap": false, 00:06:29.056 "mutual_chap": false, 00:06:29.056 "chap_group": 0, 00:06:29.056 "max_large_datain_per_connection": 64, 00:06:29.056 "max_r2t_per_connection": 4, 00:06:29.056 "pdu_pool_size": 36864, 00:06:29.056 "immediate_data_pool_size": 16384, 00:06:29.056 "data_out_pool_size": 2048 00:06:29.056 } 00:06:29.056 } 00:06:29.056 ] 00:06:29.056 } 00:06:29.056 ] 00:06:29.056 } 00:06:29.056 18:13:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:29.056 18:13:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62388 00:06:29.056 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62388 ']' 00:06:29.056 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62388 00:06:29.056 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:29.056 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.056 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62388 00:06:29.056 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.056 killing process with pid 62388 00:06:29.056 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.056 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62388' 00:06:29.056 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62388 00:06:29.056 18:13:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62388 00:06:31.590 18:13:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62439 00:06:31.590 18:13:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:31.590 18:13:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:36.862 18:13:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62439 00:06:36.862 18:13:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62439 ']' 00:06:36.862 18:13:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62439 00:06:36.862 18:13:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:36.862 18:13:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.862 18:13:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62439 00:06:36.862 killing process with pid 62439 00:06:36.862 18:13:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.862 18:13:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.862 18:13:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62439' 00:06:36.862 18:13:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62439 00:06:36.862 18:13:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62439 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:38.767 ************************************ 00:06:38.767 END TEST skip_rpc_with_json 00:06:38.767 ************************************ 00:06:38.767 00:06:38.767 real 0m10.968s 00:06:38.767 user 0m10.351s 00:06:38.767 sys 0m0.980s 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:38.767 18:13:50 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:38.767 18:13:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:38.767 18:13:50 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.767 18:13:50 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.767 18:13:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.767 ************************************ 00:06:38.767 START TEST skip_rpc_with_delay 00:06:38.767 ************************************ 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:38.767 [2024-07-22 18:13:50.533330] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
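That app.c error is the expected outcome: --wait-for-rpc tells the target to pause initialization until an RPC arrives, which cannot work when --no-rpc-server disables the listener, so spdk_tgt refuses to start at all. A minimal sketch of the same negative check:

  # must exit non-zero and print the app.c 832 error shown above
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'spdk_tgt accepted an invalid flag combination' >&2
      exit 1
  fi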
00:06:38.767 [2024-07-22 18:13:50.533477] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:38.767 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.767 00:06:38.767 real 0m0.160s 00:06:38.767 user 0m0.089s 00:06:38.768 sys 0m0.071s 00:06:38.768 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.768 ************************************ 00:06:38.768 END TEST skip_rpc_with_delay 00:06:38.768 ************************************ 00:06:38.768 18:13:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:38.768 18:13:50 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:38.768 18:13:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:38.768 18:13:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:38.768 18:13:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:38.768 18:13:50 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.768 18:13:50 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.768 18:13:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.768 ************************************ 00:06:38.768 START TEST exit_on_failed_rpc_init 00:06:38.768 ************************************ 00:06:38.768 18:13:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:38.768 18:13:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62567 00:06:38.768 18:13:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62567 00:06:38.768 18:13:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.768 18:13:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 62567 ']' 00:06:38.768 18:13:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.768 18:13:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.768 18:13:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.768 18:13:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.768 18:13:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:38.768 [2024-07-22 18:13:50.769526] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:38.768 [2024-07-22 18:13:50.769750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62567 ] 00:06:39.026 [2024-07-22 18:13:50.947666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.285 [2024-07-22 18:13:51.210331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.220 18:13:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.220 18:13:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:40.220 18:13:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:40.220 18:13:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.220 18:13:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:40.220 18:13:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.220 18:13:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:40.220 18:13:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.221 18:13:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:40.221 18:13:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.221 18:13:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:40.221 18:13:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.221 18:13:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:40.221 18:13:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:40.221 18:13:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.221 [2024-07-22 18:13:52.137609] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:40.221 [2024-07-22 18:13:52.137833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62596 ] 00:06:40.479 [2024-07-22 18:13:52.317087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.738 [2024-07-22 18:13:52.582131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.738 [2024-07-22 18:13:52.582279] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
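The "socket in use" error above is the crux of exit_on_failed_rpc_init: both targets default to /var/tmp/spdk.sock, so the second instance cannot bind its RPC listener and must abort startup rather than run silently without one. A condensed sketch, assuming the same defaults (passing a different socket path via the target's -r option would avoid the collision):

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$tgt" -m 0x1 &                      # first target claims the default /var/tmp/spdk.sock
  waitforlisten $!
  if "$tgt" -m 0x2; then               # second target, same default socket: init must fail
      echo 'second target should not have started' >&2
      exit 1
  fi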
00:06:40.738 [2024-07-22 18:13:52.582306] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:40.738 [2024-07-22 18:13:52.582324] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62567 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 62567 ']' 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 62567 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62567 00:06:41.305 killing process with pid 62567 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62567' 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 62567 00:06:41.305 18:13:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 62567 00:06:43.840 ************************************ 00:06:43.840 END TEST exit_on_failed_rpc_init 00:06:43.840 ************************************ 00:06:43.840 00:06:43.840 real 0m4.586s 00:06:43.840 user 0m5.208s 00:06:43.840 sys 0m0.665s 00:06:43.840 18:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.840 18:13:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:43.840 18:13:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:43.840 18:13:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:43.840 00:06:43.840 real 0m23.363s 00:06:43.840 user 0m22.522s 00:06:43.840 sys 0m2.362s 00:06:43.840 18:13:55 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.840 ************************************ 00:06:43.840 END TEST skip_rpc 00:06:43.840 ************************************ 00:06:43.840 18:13:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.840 18:13:55 -- common/autotest_common.sh@1142 -- # return 0 00:06:43.840 18:13:55 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:43.840 18:13:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.840 
18:13:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.840 18:13:55 -- common/autotest_common.sh@10 -- # set +x 00:06:43.840 ************************************ 00:06:43.840 START TEST rpc_client 00:06:43.840 ************************************ 00:06:43.840 18:13:55 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:43.840 * Looking for test storage... 00:06:43.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:43.840 18:13:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:43.840 OK 00:06:43.840 18:13:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:43.840 00:06:43.840 real 0m0.145s 00:06:43.840 user 0m0.069s 00:06:43.840 sys 0m0.082s 00:06:43.840 18:13:55 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.840 18:13:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:43.840 ************************************ 00:06:43.840 END TEST rpc_client 00:06:43.840 ************************************ 00:06:43.841 18:13:55 -- common/autotest_common.sh@1142 -- # return 0 00:06:43.841 18:13:55 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:43.841 18:13:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.841 18:13:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.841 18:13:55 -- common/autotest_common.sh@10 -- # set +x 00:06:43.841 ************************************ 00:06:43.841 START TEST json_config 00:06:43.841 ************************************ 00:06:43.841 18:13:55 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:43.841 18:13:55 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bdffaca2-0b18-4758-9ce7-2e5bdb4d40b8 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=bdffaca2-0b18-4758-9ce7-2e5bdb4d40b8 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.841 18:13:55 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.841 18:13:55 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.841 18:13:55 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.841 18:13:55 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.841 18:13:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.841 18:13:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.841 18:13:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.841 18:13:55 json_config -- paths/export.sh@5 -- # export PATH 00:06:43.841 18:13:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@47 -- # : 0 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:43.841 18:13:55 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:43.841 18:13:55 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:43.841 18:13:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:43.841 18:13:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:43.841 18:13:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:43.841 18:13:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:43.841 18:13:55 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:43.841 WARNING: No tests are enabled so not running JSON configuration tests 00:06:43.841 18:13:55 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:43.841 00:06:43.841 real 0m0.082s 00:06:43.841 user 0m0.037s 00:06:43.841 sys 0m0.042s 00:06:43.841 18:13:55 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.841 ************************************ 00:06:43.841 END TEST json_config 00:06:43.841 ************************************ 00:06:43.841 18:13:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.841 18:13:55 -- common/autotest_common.sh@1142 -- # return 0 00:06:43.841 18:13:55 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:43.841 18:13:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.841 18:13:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.841 18:13:55 -- common/autotest_common.sh@10 -- # set +x 00:06:43.841 ************************************ 00:06:43.841 START TEST json_config_extra_key 00:06:43.841 ************************************ 00:06:43.841 18:13:55 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:43.841 18:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bdffaca2-0b18-4758-9ce7-2e5bdb4d40b8 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=bdffaca2-0b18-4758-9ce7-2e5bdb4d40b8 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.841 18:13:55 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:06:43.841 18:13:55 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.841 18:13:55 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.841 18:13:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.841 18:13:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.841 18:13:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.841 18:13:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:43.841 18:13:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:43.841 18:13:55 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:43.841 18:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:43.841 18:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:43.842 18:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:43.842 18:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
00:06:43.842 18:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:43.842 18:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:43.842 18:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:43.842 18:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:43.842 18:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:43.842 18:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:43.842 18:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:43.842 INFO: launching applications... 00:06:43.842 18:13:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:43.842 18:13:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:43.842 18:13:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:43.842 18:13:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:43.842 18:13:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:43.842 18:13:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:43.842 18:13:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.842 18:13:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.842 18:13:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62771 00:06:43.842 18:13:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:43.842 Waiting for target to run... 00:06:43.842 18:13:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62771 /var/tmp/spdk_tgt.sock 00:06:43.842 18:13:55 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:43.842 18:13:55 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 62771 ']' 00:06:43.842 18:13:55 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:43.842 18:13:55 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:43.842 18:13:55 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:43.842 18:13:55 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.842 18:13:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:43.842 [2024-07-22 18:13:55.808502] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:43.842 [2024-07-22 18:13:55.808865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62771 ] 00:06:44.421 [2024-07-22 18:13:56.246283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.678 [2024-07-22 18:13:56.499031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.245 18:13:57 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.245 00:06:45.245 INFO: shutting down applications... 00:06:45.245 18:13:57 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:45.245 18:13:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:45.245 18:13:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:45.245 18:13:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:45.245 18:13:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:45.245 18:13:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:45.245 18:13:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62771 ]] 00:06:45.245 18:13:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62771 00:06:45.245 18:13:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:45.245 18:13:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:45.245 18:13:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62771 00:06:45.245 18:13:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:45.811 18:13:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:45.811 18:13:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:45.811 18:13:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62771 00:06:45.811 18:13:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:46.378 18:13:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:46.378 18:13:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:46.378 18:13:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62771 00:06:46.378 18:13:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:46.636 18:13:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:46.636 18:13:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:46.636 18:13:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62771 00:06:46.636 18:13:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:47.203 18:13:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:47.203 18:13:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.203 18:13:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62771 00:06:47.203 18:13:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:47.777 18:13:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:47.777 18:13:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.777 18:13:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62771 
00:06:47.777 18:13:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:48.344 18:14:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:48.344 18:14:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:48.344 18:14:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62771 00:06:48.344 SPDK target shutdown done 00:06:48.344 Success 00:06:48.344 18:14:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:48.344 18:14:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:48.344 18:14:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:48.344 18:14:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:48.344 18:14:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:48.344 ************************************ 00:06:48.344 END TEST json_config_extra_key 00:06:48.344 ************************************ 00:06:48.344 00:06:48.344 real 0m4.510s 00:06:48.344 user 0m3.973s 00:06:48.344 sys 0m0.578s 00:06:48.344 18:14:00 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.344 18:14:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:48.344 18:14:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.344 18:14:00 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:48.344 18:14:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.344 18:14:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.344 18:14:00 -- common/autotest_common.sh@10 -- # set +x 00:06:48.344 ************************************ 00:06:48.344 START TEST alias_rpc 00:06:48.344 ************************************ 00:06:48.344 18:14:00 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:48.344 * Looking for test storage... 00:06:48.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:48.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.344 18:14:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.344 18:14:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62880 00:06:48.344 18:14:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:48.344 18:14:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62880 00:06:48.344 18:14:00 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 62880 ']' 00:06:48.344 18:14:00 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.344 18:14:00 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.344 18:14:00 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.344 18:14:00 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.344 18:14:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.602 [2024-07-22 18:14:00.370771] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:48.602 [2024-07-22 18:14:00.371147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62880 ] 00:06:48.602 [2024-07-22 18:14:00.539592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.861 [2024-07-22 18:14:00.798724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.795 18:14:01 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.795 18:14:01 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:49.795 18:14:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:50.053 18:14:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62880 00:06:50.053 18:14:01 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 62880 ']' 00:06:50.053 18:14:01 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 62880 00:06:50.053 18:14:01 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:50.053 18:14:01 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.053 18:14:01 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62880 00:06:50.053 killing process with pid 62880 00:06:50.053 18:14:01 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.053 18:14:01 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.053 18:14:01 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62880' 00:06:50.053 18:14:01 alias_rpc -- common/autotest_common.sh@967 -- # kill 62880 00:06:50.053 18:14:01 alias_rpc -- common/autotest_common.sh@972 -- # wait 62880 00:06:52.644 ************************************ 00:06:52.644 END TEST alias_rpc 00:06:52.644 ************************************ 00:06:52.644 00:06:52.644 real 0m3.993s 00:06:52.644 user 0m4.132s 00:06:52.644 sys 0m0.605s 00:06:52.644 18:14:04 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.644 18:14:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.644 18:14:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:52.644 18:14:04 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:52.644 18:14:04 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:52.644 18:14:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.644 18:14:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.644 18:14:04 -- common/autotest_common.sh@10 -- # set +x 00:06:52.644 ************************************ 00:06:52.644 START TEST spdkcli_tcp 00:06:52.644 ************************************ 00:06:52.644 18:14:04 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:52.644 * Looking for test storage... 
00:06:52.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:52.644 18:14:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:52.644 18:14:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:52.644 18:14:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:52.644 18:14:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:52.644 18:14:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:52.644 18:14:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:52.644 18:14:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:52.644 18:14:04 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:52.644 18:14:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.644 18:14:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62974 00:06:52.644 18:14:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 62974 00:06:52.644 18:14:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:52.644 18:14:04 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 62974 ']' 00:06:52.644 18:14:04 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.644 18:14:04 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.644 18:14:04 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.644 18:14:04 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.644 18:14:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.644 [2024-07-22 18:14:04.496644] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:52.644 [2024-07-22 18:14:04.496829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62974 ] 00:06:52.903 [2024-07-22 18:14:04.663387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.161 [2024-07-22 18:14:04.927398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.161 [2024-07-22 18:14:04.927408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.095 18:14:05 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.095 18:14:05 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:54.095 18:14:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=62996 00:06:54.095 18:14:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:54.095 18:14:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:54.095 [ 00:06:54.095 "bdev_malloc_delete", 00:06:54.095 "bdev_malloc_create", 00:06:54.095 "bdev_null_resize", 00:06:54.095 "bdev_null_delete", 00:06:54.095 "bdev_null_create", 00:06:54.095 "bdev_nvme_cuse_unregister", 00:06:54.095 "bdev_nvme_cuse_register", 00:06:54.095 "bdev_opal_new_user", 00:06:54.095 "bdev_opal_set_lock_state", 00:06:54.095 "bdev_opal_delete", 00:06:54.095 "bdev_opal_get_info", 00:06:54.095 "bdev_opal_create", 00:06:54.095 "bdev_nvme_opal_revert", 00:06:54.095 "bdev_nvme_opal_init", 00:06:54.095 "bdev_nvme_send_cmd", 00:06:54.095 "bdev_nvme_get_path_iostat", 00:06:54.095 "bdev_nvme_get_mdns_discovery_info", 00:06:54.095 "bdev_nvme_stop_mdns_discovery", 00:06:54.095 "bdev_nvme_start_mdns_discovery", 00:06:54.095 "bdev_nvme_set_multipath_policy", 00:06:54.095 "bdev_nvme_set_preferred_path", 00:06:54.095 "bdev_nvme_get_io_paths", 00:06:54.095 "bdev_nvme_remove_error_injection", 00:06:54.095 "bdev_nvme_add_error_injection", 00:06:54.095 "bdev_nvme_get_discovery_info", 00:06:54.095 "bdev_nvme_stop_discovery", 00:06:54.095 "bdev_nvme_start_discovery", 00:06:54.095 "bdev_nvme_get_controller_health_info", 00:06:54.095 "bdev_nvme_disable_controller", 00:06:54.095 "bdev_nvme_enable_controller", 00:06:54.095 "bdev_nvme_reset_controller", 00:06:54.095 "bdev_nvme_get_transport_statistics", 00:06:54.095 "bdev_nvme_apply_firmware", 00:06:54.095 "bdev_nvme_detach_controller", 00:06:54.095 "bdev_nvme_get_controllers", 00:06:54.095 "bdev_nvme_attach_controller", 00:06:54.096 "bdev_nvme_set_hotplug", 00:06:54.096 "bdev_nvme_set_options", 00:06:54.096 "bdev_passthru_delete", 00:06:54.096 "bdev_passthru_create", 00:06:54.096 "bdev_lvol_set_parent_bdev", 00:06:54.096 "bdev_lvol_set_parent", 00:06:54.096 "bdev_lvol_check_shallow_copy", 00:06:54.096 "bdev_lvol_start_shallow_copy", 00:06:54.096 "bdev_lvol_grow_lvstore", 00:06:54.096 "bdev_lvol_get_lvols", 00:06:54.096 "bdev_lvol_get_lvstores", 00:06:54.096 "bdev_lvol_delete", 00:06:54.096 "bdev_lvol_set_read_only", 00:06:54.096 "bdev_lvol_resize", 00:06:54.096 "bdev_lvol_decouple_parent", 00:06:54.096 "bdev_lvol_inflate", 00:06:54.096 "bdev_lvol_rename", 00:06:54.096 "bdev_lvol_clone_bdev", 00:06:54.096 "bdev_lvol_clone", 00:06:54.096 "bdev_lvol_snapshot", 00:06:54.096 "bdev_lvol_create", 00:06:54.096 "bdev_lvol_delete_lvstore", 00:06:54.096 "bdev_lvol_rename_lvstore", 00:06:54.096 "bdev_lvol_create_lvstore", 
00:06:54.096 "bdev_raid_set_options", 00:06:54.096 "bdev_raid_remove_base_bdev", 00:06:54.096 "bdev_raid_add_base_bdev", 00:06:54.096 "bdev_raid_delete", 00:06:54.096 "bdev_raid_create", 00:06:54.096 "bdev_raid_get_bdevs", 00:06:54.096 "bdev_error_inject_error", 00:06:54.096 "bdev_error_delete", 00:06:54.096 "bdev_error_create", 00:06:54.096 "bdev_split_delete", 00:06:54.096 "bdev_split_create", 00:06:54.096 "bdev_delay_delete", 00:06:54.096 "bdev_delay_create", 00:06:54.096 "bdev_delay_update_latency", 00:06:54.096 "bdev_zone_block_delete", 00:06:54.096 "bdev_zone_block_create", 00:06:54.096 "blobfs_create", 00:06:54.096 "blobfs_detect", 00:06:54.096 "blobfs_set_cache_size", 00:06:54.096 "bdev_xnvme_delete", 00:06:54.096 "bdev_xnvme_create", 00:06:54.096 "bdev_aio_delete", 00:06:54.096 "bdev_aio_rescan", 00:06:54.096 "bdev_aio_create", 00:06:54.096 "bdev_ftl_set_property", 00:06:54.096 "bdev_ftl_get_properties", 00:06:54.096 "bdev_ftl_get_stats", 00:06:54.096 "bdev_ftl_unmap", 00:06:54.096 "bdev_ftl_unload", 00:06:54.096 "bdev_ftl_delete", 00:06:54.096 "bdev_ftl_load", 00:06:54.096 "bdev_ftl_create", 00:06:54.096 "bdev_virtio_attach_controller", 00:06:54.096 "bdev_virtio_scsi_get_devices", 00:06:54.096 "bdev_virtio_detach_controller", 00:06:54.096 "bdev_virtio_blk_set_hotplug", 00:06:54.096 "bdev_iscsi_delete", 00:06:54.096 "bdev_iscsi_create", 00:06:54.096 "bdev_iscsi_set_options", 00:06:54.096 "accel_error_inject_error", 00:06:54.096 "ioat_scan_accel_module", 00:06:54.096 "dsa_scan_accel_module", 00:06:54.096 "iaa_scan_accel_module", 00:06:54.096 "keyring_file_remove_key", 00:06:54.096 "keyring_file_add_key", 00:06:54.096 "keyring_linux_set_options", 00:06:54.096 "iscsi_get_histogram", 00:06:54.096 "iscsi_enable_histogram", 00:06:54.096 "iscsi_set_options", 00:06:54.096 "iscsi_get_auth_groups", 00:06:54.096 "iscsi_auth_group_remove_secret", 00:06:54.096 "iscsi_auth_group_add_secret", 00:06:54.096 "iscsi_delete_auth_group", 00:06:54.096 "iscsi_create_auth_group", 00:06:54.096 "iscsi_set_discovery_auth", 00:06:54.096 "iscsi_get_options", 00:06:54.096 "iscsi_target_node_request_logout", 00:06:54.096 "iscsi_target_node_set_redirect", 00:06:54.096 "iscsi_target_node_set_auth", 00:06:54.096 "iscsi_target_node_add_lun", 00:06:54.096 "iscsi_get_stats", 00:06:54.096 "iscsi_get_connections", 00:06:54.096 "iscsi_portal_group_set_auth", 00:06:54.096 "iscsi_start_portal_group", 00:06:54.096 "iscsi_delete_portal_group", 00:06:54.096 "iscsi_create_portal_group", 00:06:54.096 "iscsi_get_portal_groups", 00:06:54.096 "iscsi_delete_target_node", 00:06:54.096 "iscsi_target_node_remove_pg_ig_maps", 00:06:54.096 "iscsi_target_node_add_pg_ig_maps", 00:06:54.096 "iscsi_create_target_node", 00:06:54.096 "iscsi_get_target_nodes", 00:06:54.096 "iscsi_delete_initiator_group", 00:06:54.096 "iscsi_initiator_group_remove_initiators", 00:06:54.096 "iscsi_initiator_group_add_initiators", 00:06:54.096 "iscsi_create_initiator_group", 00:06:54.096 "iscsi_get_initiator_groups", 00:06:54.096 "nvmf_set_crdt", 00:06:54.096 "nvmf_set_config", 00:06:54.096 "nvmf_set_max_subsystems", 00:06:54.096 "nvmf_stop_mdns_prr", 00:06:54.096 "nvmf_publish_mdns_prr", 00:06:54.096 "nvmf_subsystem_get_listeners", 00:06:54.096 "nvmf_subsystem_get_qpairs", 00:06:54.096 "nvmf_subsystem_get_controllers", 00:06:54.096 "nvmf_get_stats", 00:06:54.096 "nvmf_get_transports", 00:06:54.096 "nvmf_create_transport", 00:06:54.096 "nvmf_get_targets", 00:06:54.096 "nvmf_delete_target", 00:06:54.096 "nvmf_create_target", 00:06:54.096 
"nvmf_subsystem_allow_any_host", 00:06:54.096 "nvmf_subsystem_remove_host", 00:06:54.096 "nvmf_subsystem_add_host", 00:06:54.096 "nvmf_ns_remove_host", 00:06:54.096 "nvmf_ns_add_host", 00:06:54.096 "nvmf_subsystem_remove_ns", 00:06:54.096 "nvmf_subsystem_add_ns", 00:06:54.096 "nvmf_subsystem_listener_set_ana_state", 00:06:54.096 "nvmf_discovery_get_referrals", 00:06:54.096 "nvmf_discovery_remove_referral", 00:06:54.096 "nvmf_discovery_add_referral", 00:06:54.096 "nvmf_subsystem_remove_listener", 00:06:54.096 "nvmf_subsystem_add_listener", 00:06:54.096 "nvmf_delete_subsystem", 00:06:54.096 "nvmf_create_subsystem", 00:06:54.096 "nvmf_get_subsystems", 00:06:54.096 "env_dpdk_get_mem_stats", 00:06:54.096 "nbd_get_disks", 00:06:54.096 "nbd_stop_disk", 00:06:54.096 "nbd_start_disk", 00:06:54.096 "ublk_recover_disk", 00:06:54.096 "ublk_get_disks", 00:06:54.096 "ublk_stop_disk", 00:06:54.096 "ublk_start_disk", 00:06:54.096 "ublk_destroy_target", 00:06:54.096 "ublk_create_target", 00:06:54.096 "virtio_blk_create_transport", 00:06:54.096 "virtio_blk_get_transports", 00:06:54.096 "vhost_controller_set_coalescing", 00:06:54.096 "vhost_get_controllers", 00:06:54.096 "vhost_delete_controller", 00:06:54.096 "vhost_create_blk_controller", 00:06:54.096 "vhost_scsi_controller_remove_target", 00:06:54.096 "vhost_scsi_controller_add_target", 00:06:54.096 "vhost_start_scsi_controller", 00:06:54.096 "vhost_create_scsi_controller", 00:06:54.096 "thread_set_cpumask", 00:06:54.096 "framework_get_governor", 00:06:54.096 "framework_get_scheduler", 00:06:54.096 "framework_set_scheduler", 00:06:54.096 "framework_get_reactors", 00:06:54.096 "thread_get_io_channels", 00:06:54.096 "thread_get_pollers", 00:06:54.096 "thread_get_stats", 00:06:54.096 "framework_monitor_context_switch", 00:06:54.096 "spdk_kill_instance", 00:06:54.096 "log_enable_timestamps", 00:06:54.096 "log_get_flags", 00:06:54.096 "log_clear_flag", 00:06:54.096 "log_set_flag", 00:06:54.096 "log_get_level", 00:06:54.096 "log_set_level", 00:06:54.096 "log_get_print_level", 00:06:54.096 "log_set_print_level", 00:06:54.096 "framework_enable_cpumask_locks", 00:06:54.096 "framework_disable_cpumask_locks", 00:06:54.096 "framework_wait_init", 00:06:54.096 "framework_start_init", 00:06:54.096 "scsi_get_devices", 00:06:54.096 "bdev_get_histogram", 00:06:54.096 "bdev_enable_histogram", 00:06:54.096 "bdev_set_qos_limit", 00:06:54.096 "bdev_set_qd_sampling_period", 00:06:54.096 "bdev_get_bdevs", 00:06:54.096 "bdev_reset_iostat", 00:06:54.096 "bdev_get_iostat", 00:06:54.096 "bdev_examine", 00:06:54.096 "bdev_wait_for_examine", 00:06:54.096 "bdev_set_options", 00:06:54.096 "notify_get_notifications", 00:06:54.096 "notify_get_types", 00:06:54.096 "accel_get_stats", 00:06:54.096 "accel_set_options", 00:06:54.096 "accel_set_driver", 00:06:54.096 "accel_crypto_key_destroy", 00:06:54.096 "accel_crypto_keys_get", 00:06:54.096 "accel_crypto_key_create", 00:06:54.096 "accel_assign_opc", 00:06:54.096 "accel_get_module_info", 00:06:54.096 "accel_get_opc_assignments", 00:06:54.096 "vmd_rescan", 00:06:54.096 "vmd_remove_device", 00:06:54.096 "vmd_enable", 00:06:54.096 "sock_get_default_impl", 00:06:54.096 "sock_set_default_impl", 00:06:54.096 "sock_impl_set_options", 00:06:54.096 "sock_impl_get_options", 00:06:54.096 "iobuf_get_stats", 00:06:54.096 "iobuf_set_options", 00:06:54.096 "framework_get_pci_devices", 00:06:54.096 "framework_get_config", 00:06:54.096 "framework_get_subsystems", 00:06:54.096 "trace_get_info", 00:06:54.096 "trace_get_tpoint_group_mask", 00:06:54.096 
"trace_disable_tpoint_group", 00:06:54.096 "trace_enable_tpoint_group", 00:06:54.096 "trace_clear_tpoint_mask", 00:06:54.096 "trace_set_tpoint_mask", 00:06:54.096 "keyring_get_keys", 00:06:54.096 "spdk_get_version", 00:06:54.096 "rpc_get_methods" 00:06:54.096 ] 00:06:54.096 18:14:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:54.096 18:14:06 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:54.096 18:14:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.096 18:14:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:54.096 18:14:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 62974 00:06:54.096 18:14:06 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 62974 ']' 00:06:54.096 18:14:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 62974 00:06:54.096 18:14:06 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:54.096 18:14:06 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.096 18:14:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62974 00:06:54.097 18:14:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.097 killing process with pid 62974 00:06:54.097 18:14:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.097 18:14:06 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62974' 00:06:54.097 18:14:06 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 62974 00:06:54.097 18:14:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 62974 00:06:56.627 ************************************ 00:06:56.627 END TEST spdkcli_tcp 00:06:56.627 ************************************ 00:06:56.627 00:06:56.627 real 0m4.070s 00:06:56.627 user 0m7.175s 00:06:56.627 sys 0m0.630s 00:06:56.627 18:14:08 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.627 18:14:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:56.627 18:14:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:56.627 18:14:08 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:56.627 18:14:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.627 18:14:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.627 18:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:56.627 ************************************ 00:06:56.627 START TEST dpdk_mem_utility 00:06:56.627 ************************************ 00:06:56.627 18:14:08 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:56.627 * Looking for test storage... 
00:06:56.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:56.627 18:14:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:56.627 18:14:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63088 00:06:56.627 18:14:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:56.627 18:14:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63088 00:06:56.627 18:14:08 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 63088 ']' 00:06:56.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.627 18:14:08 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.627 18:14:08 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.627 18:14:08 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.627 18:14:08 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.627 18:14:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:56.627 [2024-07-22 18:14:08.546446] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:56.627 [2024-07-22 18:14:08.546887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63088 ] 00:06:56.886 [2024-07-22 18:14:08.716383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.144 [2024-07-22 18:14:08.986803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.080 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.080 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:58.080 18:14:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:58.080 18:14:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:58.080 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.080 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:58.080 { 00:06:58.080 "filename": "/tmp/spdk_mem_dump.txt" 00:06:58.080 } 00:06:58.080 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.080 18:14:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:58.080 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:58.080 1 heaps totaling size 820.000000 MiB 00:06:58.080 size: 820.000000 MiB heap id: 0 00:06:58.080 end heaps---------- 00:06:58.080 8 mempools totaling size 598.116089 MiB 00:06:58.080 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:58.080 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:58.080 size: 84.521057 MiB name: bdev_io_63088 00:06:58.080 size: 51.011292 MiB name: evtpool_63088 00:06:58.081 size: 50.003479 MiB name: msgpool_63088 00:06:58.081 size: 21.763794 MiB name: PDU_Pool 00:06:58.081 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:06:58.081 size: 0.026123 MiB name: Session_Pool 00:06:58.081 end mempools------- 00:06:58.081 6 memzones totaling size 4.142822 MiB 00:06:58.081 size: 1.000366 MiB name: RG_ring_0_63088 00:06:58.081 size: 1.000366 MiB name: RG_ring_1_63088 00:06:58.081 size: 1.000366 MiB name: RG_ring_4_63088 00:06:58.081 size: 1.000366 MiB name: RG_ring_5_63088 00:06:58.081 size: 0.125366 MiB name: RG_ring_2_63088 00:06:58.081 size: 0.015991 MiB name: RG_ring_3_63088 00:06:58.081 end memzones------- 00:06:58.081 18:14:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:58.081 heap id: 0 total size: 820.000000 MiB number of busy elements: 297 number of free elements: 18 00:06:58.081 list of free elements. size: 18.452271 MiB 00:06:58.081 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:58.081 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:58.081 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:58.081 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:58.081 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:58.081 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:58.081 element at address: 0x200019600000 with size: 0.999084 MiB 00:06:58.081 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:58.081 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:58.081 element at address: 0x200018e00000 with size: 0.959656 MiB 00:06:58.081 element at address: 0x200019900040 with size: 0.936401 MiB 00:06:58.081 element at address: 0x200000200000 with size: 0.830200 MiB 00:06:58.081 element at address: 0x20001b000000 with size: 0.564880 MiB 00:06:58.081 element at address: 0x200019200000 with size: 0.487976 MiB 00:06:58.081 element at address: 0x200019a00000 with size: 0.485413 MiB 00:06:58.081 element at address: 0x200013800000 with size: 0.467651 MiB 00:06:58.081 element at address: 0x200028400000 with size: 0.390442 MiB 00:06:58.081 element at address: 0x200003a00000 with size: 0.351990 MiB 00:06:58.081 list of standard malloc elements. 
size: 199.283325 MiB 00:06:58.081 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:58.081 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:58.081 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:58.081 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:58.081 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:58.081 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:58.081 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:58.081 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:58.081 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:06:58.081 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:06:58.081 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:06:58.081 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d6f00 with size: 0.000244 MiB 
00:06:58.081 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:58.081 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:58.081 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:06:58.081 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:06:58.081 element at 
address: 0x2000137ff380 with size: 0.000244 MiB 00:06:58.081 [several hundred further free-list entries, each 0.000244 MiB, condensed here: address ranges 0x2000137ff480-0x2000137fff00, 0x200013877b80-0x200013878580, 0x20001927cec0-0x20001927d9c0, 0x20001b0909c0-0x20001b0953c0 and 0x200028463f40-0x20002846fc80, plus assorted per-pool bookkeeping entries] 00:06:58.082 element at address: 0x20002846fd80
with size: 0.000244 MiB 00:06:58.083 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:06:58.083 list of memzone associated elements. size: 602.264404 MiB 00:06:58.083 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:58.083 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:58.083 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:58.083 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:58.083 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:58.083 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63088_0 00:06:58.083 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:58.083 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63088_0 00:06:58.083 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:58.083 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63088_0 00:06:58.083 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:58.083 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:58.083 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:58.083 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:58.083 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:58.083 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63088 00:06:58.083 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:58.083 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63088 00:06:58.083 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:58.083 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63088 00:06:58.083 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:58.083 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:58.083 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:58.083 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:58.083 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:58.083 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:58.083 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:58.083 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:58.083 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:58.083 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63088 00:06:58.083 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:58.083 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63088 00:06:58.083 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:58.083 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63088 00:06:58.083 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:58.083 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63088 00:06:58.083 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:58.083 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63088 00:06:58.083 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:06:58.083 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:58.083 element at address: 0x200013878680 with size: 0.500549 MiB 00:06:58.083 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:58.083 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:06:58.083 associated memzone info: size: 0.250366 MiB name: 
RG_MP_PDU_immediate_data_Pool 00:06:58.083 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:58.083 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63088 00:06:58.083 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:06:58.083 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:58.083 element at address: 0x200028464140 with size: 0.023804 MiB 00:06:58.083 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:58.083 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:58.083 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63088 00:06:58.083 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:06:58.083 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:58.083 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:06:58.083 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63088 00:06:58.083 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:58.083 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63088 00:06:58.083 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:06:58.083 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:58.083 18:14:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:58.083 18:14:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63088 00:06:58.083 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 63088 ']' 00:06:58.083 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 63088 00:06:58.083 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:58.083 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.083 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63088 00:06:58.083 killing process with pid 63088 00:06:58.083 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:58.083 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:58.083 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63088' 00:06:58.083 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 63088 00:06:58.083 18:14:09 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 63088 00:07:00.624 ************************************ 00:07:00.624 END TEST dpdk_mem_utility 00:07:00.624 ************************************ 00:07:00.624 00:07:00.624 real 0m3.830s 00:07:00.624 user 0m3.839s 00:07:00.624 sys 0m0.543s 00:07:00.624 18:14:12 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.624 18:14:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:00.624 18:14:12 -- common/autotest_common.sh@1142 -- # return 0 00:07:00.624 18:14:12 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:00.624 18:14:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.624 18:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.624 18:14:12 -- common/autotest_common.sh@10 -- # set +x 00:07:00.624 ************************************ 00:07:00.624 START TEST event 00:07:00.624 ************************************ 00:07:00.624 18:14:12 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 
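The memzone summary above is the tail of TEST dpdk_mem_utility: the test walks DPDK's allocator state and prints every element together with the memzone it belongs to (the PDU and SCSI task pools plus the per-test bdev_io/evtpool/msgpool regions for pid 63088). A sketch of pulling the same statistics by hand over JSON-RPC, assuming a running SPDK target and the in-tree rpc.py; the exact dump location is implementation-defined:

  # start a target and ask the env layer to dump DPDK memory statistics
  sudo ./build/bin/spdk_tgt &
  sleep 2
  # env_dpdk_get_mem_stats returns the path of a stats file whose contents
  # resemble the element/memzone listing captured in this log
  sudo ./scripts/rpc.py env_dpdk_get_mem_stats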
00:07:00.624 * Looking for test storage... 00:07:00.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:00.624 18:14:12 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:00.624 18:14:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:00.624 18:14:12 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:00.624 18:14:12 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:00.624 18:14:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.624 18:14:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.624 ************************************ 00:07:00.624 START TEST event_perf 00:07:00.624 ************************************ 00:07:00.624 18:14:12 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:00.624 Running I/O for 1 seconds...[2024-07-22 18:14:12.381010] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:00.624 [2024-07-22 18:14:12.381177] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63188 ] 00:07:00.624 [2024-07-22 18:14:12.558152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.882 [2024-07-22 18:14:12.813777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.882 [2024-07-22 18:14:12.814060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.882 Running I/O for 1 seconds...[2024-07-22 18:14:12.814043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.882 [2024-07-22 18:14:12.813899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.277 00:07:02.277 lcore 0: 189452 00:07:02.277 lcore 1: 189451 00:07:02.277 lcore 2: 189452 00:07:02.277 lcore 3: 189453 00:07:02.277 done. 00:07:02.277 00:07:02.277 ************************************ 00:07:02.277 END TEST event_perf 00:07:02.277 ************************************ 00:07:02.277 real 0m1.887s 00:07:02.277 user 0m4.616s 00:07:02.277 sys 0m0.141s 00:07:02.277 18:14:14 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.277 18:14:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.277 18:14:14 event -- common/autotest_common.sh@1142 -- # return 0 00:07:02.277 18:14:14 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:02.277 18:14:14 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:02.277 18:14:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.277 18:14:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.277 ************************************ 00:07:02.277 START TEST event_reactor 00:07:02.277 ************************************ 00:07:02.277 18:14:14 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:02.534 [2024-07-22 18:14:14.318794] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
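The event_perf run that just completed (the "lcore N:" lines above) is a one-second throughput test across four reactors; the -m 0xF core mask matches the four "Reactor started" notices and the four per-lcore counters of roughly 189k events each. Rerunning it directly from a built tree is a one-liner (path relative to the repo root, as in the trace):

  # -m 0xF: reactors on cores 0-3; -t 1: run the measurement for one second
  sudo ./test/event/event_perf/event_perf -m 0xF -t 1
  # prints one "lcore N: <count>" line per reactor, then "done."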
00:07:02.534 [2024-07-22 18:14:14.318970] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63227 ] 00:07:02.534 [2024-07-22 18:14:14.496620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.793 [2024-07-22 18:14:14.745578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.216 test_start 00:07:04.216 oneshot 00:07:04.216 tick 100 00:07:04.216 tick 100 00:07:04.216 tick 250 00:07:04.216 tick 100 00:07:04.216 tick 100 00:07:04.216 tick 250 00:07:04.216 tick 500 00:07:04.216 tick 100 00:07:04.216 tick 100 00:07:04.216 tick 100 00:07:04.216 tick 250 00:07:04.216 tick 100 00:07:04.216 tick 100 00:07:04.216 test_end 00:07:04.216 00:07:04.216 real 0m1.867s 00:07:04.216 user 0m1.640s 00:07:04.216 sys 0m0.115s 00:07:04.216 18:14:16 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.216 18:14:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:04.216 ************************************ 00:07:04.216 END TEST event_reactor 00:07:04.216 ************************************ 00:07:04.216 18:14:16 event -- common/autotest_common.sh@1142 -- # return 0 00:07:04.216 18:14:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:04.216 18:14:16 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:04.216 18:14:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.216 18:14:16 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.217 ************************************ 00:07:04.217 START TEST event_reactor_perf 00:07:04.217 ************************************ 00:07:04.217 18:14:16 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:04.475 [2024-07-22 18:14:16.233120] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
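The oneshot/tick block above is TEST event_reactor exercising timed events on a single core; each "tick <n>" line marks one firing, the numbers (100/250/500 in this run) apparently distinguishing the configured intervals. A sketch of the direct invocation, mirroring the traced command:

  # single reactor (default core mask 0x1), one-second run
  sudo ./test/event/reactor/reactor -t 1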
00:07:04.475 [2024-07-22 18:14:16.233324] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63269 ] 00:07:04.475 [2024-07-22 18:14:16.396825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.733 [2024-07-22 18:14:16.635037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.109 test_start 00:07:06.109 test_end 00:07:06.109 Performance: 284235 events per second 00:07:06.109 00:07:06.109 real 0m1.843s 00:07:06.109 user 0m1.623s 00:07:06.109 sys 0m0.109s 00:07:06.109 18:14:18 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.110 18:14:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.110 ************************************ 00:07:06.110 END TEST event_reactor_perf 00:07:06.110 ************************************ 00:07:06.110 18:14:18 event -- common/autotest_common.sh@1142 -- # return 0 00:07:06.110 18:14:18 event -- event/event.sh@49 -- # uname -s 00:07:06.110 18:14:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:06.110 18:14:18 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:06.110 18:14:18 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.110 18:14:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.110 18:14:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.110 ************************************ 00:07:06.110 START TEST event_scheduler 00:07:06.110 ************************************ 00:07:06.110 18:14:18 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:06.398 * Looking for test storage... 00:07:06.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:06.398 18:14:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:06.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.398 18:14:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63337 00:07:06.398 18:14:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:06.398 18:14:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:06.398 18:14:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63337 00:07:06.398 18:14:18 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 63337 ']' 00:07:06.398 18:14:18 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.398 18:14:18 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.398 18:14:18 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.398 18:14:18 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.398 18:14:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:06.398 [2024-07-22 18:14:18.274136] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
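event_reactor_perf closes the trio: a single reactor processing self-posted events for one second, here sustaining 284235 events per second on lcore 0. Direct invocation, as traced:

  sudo ./test/event/reactor_perf/reactor_perf -t 1
  # emits a single "Performance: <n> events per second" line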
00:07:06.398 [2024-07-22 18:14:18.274593] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63337 ] 00:07:06.657 [2024-07-22 18:14:18.451854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.916 [2024-07-22 18:14:18.742415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.916 [2024-07-22 18:14:18.742538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.916 [2024-07-22 18:14:18.742660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.916 [2024-07-22 18:14:18.742734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.174 18:14:19 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.174 18:14:19 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:07:07.174 18:14:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:07.433 18:14:19 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.433 18:14:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.433 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:07.433 POWER: Cannot set governor of lcore 0 to userspace 00:07:07.433 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:07.433 POWER: Cannot set governor of lcore 0 to performance 00:07:07.433 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:07.433 POWER: Cannot set governor of lcore 0 to userspace 00:07:07.433 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:07.433 POWER: Cannot set governor of lcore 0 to userspace 00:07:07.433 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:07.433 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:07.433 POWER: Unable to set Power Management Environment for lcore 0 00:07:07.433 [2024-07-22 18:14:19.197997] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:07.433 [2024-07-22 18:14:19.198050] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:07.433 [2024-07-22 18:14:19.198094] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:07.433 [2024-07-22 18:14:19.198146] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:07.433 [2024-07-22 18:14:19.198235] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:07.433 [2024-07-22 18:14:19.198297] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:07.433 18:14:19 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.433 18:14:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:07.433 18:14:19 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.433 18:14:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 [2024-07-22 18:14:19.514742] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
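The POWER/GUEST_CHANNEL errors above are expected in this guest: framework_set_scheduler dynamic tries to bring up the DPDK governor, which needs writable cpufreq sysfs nodes, and the VM exposes neither /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor nor the virtio power-agent channel, so the governor fails and only the scheduler's load/core/busy limits (20/80/95) take effect. A quick host-side check for governor support (a sketch):

  # if these files are absent, dpdk_governor cannot initialize and the
  # dynamic scheduler falls back exactly as logged above
  for c in /sys/devices/system/cpu/cpu[0-9]*; do
    [ -f "$c/cpufreq/scaling_governor" ] && cat "$c/cpufreq/scaling_governor"
  done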
00:07:07.692 18:14:19 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.692 18:14:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:07.692 18:14:19 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.692 18:14:19 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.692 18:14:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 ************************************ 00:07:07.692 START TEST scheduler_create_thread 00:07:07.692 ************************************ 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 2 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 3 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 4 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 5 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 6 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 7 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 8 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 9 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 10 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.692 18:14:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.629 18:14:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.629 18:14:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:08.629 18:14:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:08.629 18:14:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.629 18:14:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.019 ************************************ 00:07:10.019 END TEST scheduler_create_thread 00:07:10.019 ************************************ 00:07:10.019 18:14:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.019 00:07:10.019 real 0m2.138s 00:07:10.019 user 0m0.018s 00:07:10.019 sys 0m0.007s 00:07:10.019 18:14:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.019 18:14:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.019 18:14:21 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:07:10.019 18:14:21 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:10.019 18:14:21 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63337 00:07:10.019 18:14:21 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 63337 ']' 00:07:10.019 18:14:21 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 63337 00:07:10.019 18:14:21 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:07:10.019 18:14:21 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.019 18:14:21 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63337 00:07:10.019 killing process with pid 63337 00:07:10.019 18:14:21 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:10.019 18:14:21 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:10.019 18:14:21 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63337' 00:07:10.019 18:14:21 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 63337 00:07:10.019 18:14:21 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 63337 00:07:10.278 [2024-07-22 18:14:22.144830] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
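Every thread operation in TEST scheduler_create_thread above is a JSON-RPC call routed through the scheduler plugin: scheduler_thread_create names a thread, optionally pins it with -m, and sets its active percentage with -a; the RPC prints the new thread id (11 and 12 in the trace), which scheduler_thread_set_active and scheduler_thread_delete then take as an argument. A condensed sketch of the same sequence, assuming the scheduler test app is listening on the default socket and scheduler_plugin.py from test/event/scheduler is importable:

  rpc() { ./scripts/rpc.py --plugin scheduler_plugin "$@"; }
  rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100% busy, pinned to core 0
  rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle, pinned to core 0
  rpc scheduler_thread_create -n one_third_active -a 30        # unpinned, 30% active
  tid=$(rpc scheduler_thread_create -n half_active -a 0)
  rpc scheduler_thread_set_active "$tid" 50                    # raise activity at runtime
  tid=$(rpc scheduler_thread_create -n deleted -a 100)
  rpc scheduler_thread_delete "$tid"                           # threads can be removed live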
00:07:11.655 00:07:11.655 real 0m5.249s 00:07:11.655 user 0m8.342s 00:07:11.655 sys 0m0.506s 00:07:11.655 18:14:23 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.655 ************************************ 00:07:11.655 END TEST event_scheduler 00:07:11.655 ************************************ 00:07:11.655 18:14:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:11.655 18:14:23 event -- common/autotest_common.sh@1142 -- # return 0 00:07:11.655 18:14:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:11.655 18:14:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:11.655 18:14:23 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.655 18:14:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.655 18:14:23 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.655 ************************************ 00:07:11.655 START TEST app_repeat 00:07:11.655 ************************************ 00:07:11.655 18:14:23 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:07:11.655 18:14:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.656 18:14:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.656 18:14:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:11.656 18:14:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.656 18:14:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:11.656 18:14:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:11.656 18:14:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:11.656 Process app_repeat pid: 63445 00:07:11.656 spdk_app_start Round 0 00:07:11.656 18:14:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63445 00:07:11.656 18:14:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:11.656 18:14:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.656 18:14:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63445' 00:07:11.656 18:14:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:11.656 18:14:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:11.656 18:14:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63445 /var/tmp/spdk-nbd.sock 00:07:11.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:11.656 18:14:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63445 ']' 00:07:11.656 18:14:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:11.656 18:14:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.656 18:14:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:11.656 18:14:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.656 18:14:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:11.656 [2024-07-22 18:14:23.447265] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
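TEST app_repeat, starting above, stresses setup and teardown rather than throughput: the harness is launched once on two cores against a dedicated RPC socket, and each round registers two malloc bdevs, exposes them as nbd devices, verifies data through them, and tears everything down; -t 4 matches the repeat_times=4 in the script. The traced invocation, repo-relative:

  sudo ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4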
00:07:11.656 [2024-07-22 18:14:23.448325] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63445 ] 00:07:11.656 [2024-07-22 18:14:23.613663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.914 [2024-07-22 18:14:23.853335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.914 [2024-07-22 18:14:23.853344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.480 18:14:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.480 18:14:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:12.480 18:14:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.738 Malloc0 00:07:12.996 18:14:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:13.253 Malloc1 00:07:13.253 18:14:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.253 18:14:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:13.511 /dev/nbd0 00:07:13.511 18:14:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:13.511 18:14:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:13.511 18:14:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:13.511 18:14:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:13.511 18:14:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:13.511 18:14:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:13.511 18:14:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:13.511 18:14:25 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:07:13.511 18:14:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:13.511 18:14:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:13.511 18:14:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.511 1+0 records in 00:07:13.511 1+0 records out 00:07:13.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483004 s, 8.5 MB/s 00:07:13.511 18:14:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.511 18:14:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:13.511 18:14:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.511 18:14:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:13.511 18:14:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:13.511 18:14:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.511 18:14:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.511 18:14:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:13.769 /dev/nbd1 00:07:13.769 18:14:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.769 18:14:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.769 1+0 records in 00:07:13.769 1+0 records out 00:07:13.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336435 s, 12.2 MB/s 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:13.769 18:14:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:13.769 18:14:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.769 18:14:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.769 18:14:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.769 18:14:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.769 
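The "1+0 records in/out" lines above come from waitfornbd: after nbd_start_disk attaches a bdev to /dev/nbdX, the helper polls /proc/partitions until the kernel has registered the device, then reads a single 4096-byte block with O_DIRECT and checks the copied size, proving the SPDK nbd server is actually answering I/O. The equivalent check by hand (a sketch; the temp path is illustrative):

  nbd=nbd0
  # wait for the kernel to register the device
  until grep -q -w "$nbd" /proc/partitions; do sleep 0.1; done
  # a single direct read verifies the userspace nbd server is serving I/O
  dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ] && echo ok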
18:14:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:14.028 { 00:07:14.028 "nbd_device": "/dev/nbd0", 00:07:14.028 "bdev_name": "Malloc0" 00:07:14.028 }, 00:07:14.028 { 00:07:14.028 "nbd_device": "/dev/nbd1", 00:07:14.028 "bdev_name": "Malloc1" 00:07:14.028 } 00:07:14.028 ]' 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:14.028 { 00:07:14.028 "nbd_device": "/dev/nbd0", 00:07:14.028 "bdev_name": "Malloc0" 00:07:14.028 }, 00:07:14.028 { 00:07:14.028 "nbd_device": "/dev/nbd1", 00:07:14.028 "bdev_name": "Malloc1" 00:07:14.028 } 00:07:14.028 ]' 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:14.028 /dev/nbd1' 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:14.028 /dev/nbd1' 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:14.028 256+0 records in 00:07:14.028 256+0 records out 00:07:14.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106435 s, 98.5 MB/s 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.028 18:14:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:14.028 256+0 records in 00:07:14.028 256+0 records out 00:07:14.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288741 s, 36.3 MB/s 00:07:14.028 18:14:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.028 18:14:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:14.286 256+0 records in 00:07:14.286 256+0 records out 00:07:14.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297096 s, 35.3 MB/s 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.286 18:14:26 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.286 18:14:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.545 18:14:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.545 18:14:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.545 18:14:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.545 18:14:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.545 18:14:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.545 18:14:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.545 18:14:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.545 18:14:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.545 18:14:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.545 18:14:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.804 18:14:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.804 18:14:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.804 18:14:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.804 18:14:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.804 18:14:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.804 18:14:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.804 18:14:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.804 18:14:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.804 18:14:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.804 18:14:26 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.804 18:14:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.062 18:14:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.062 18:14:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.062 18:14:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.063 18:14:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.063 18:14:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.063 18:14:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.063 18:14:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:15.063 18:14:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.063 18:14:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.063 18:14:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:15.063 18:14:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:15.063 18:14:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:15.063 18:14:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:15.630 18:14:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:17.007 [2024-07-22 18:14:28.613116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.007 [2024-07-22 18:14:28.852512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.007 [2024-07-22 18:14:28.852521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.285 [2024-07-22 18:14:29.045980] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:17.285 [2024-07-22 18:14:29.046132] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:18.718 spdk_app_start Round 1 00:07:18.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:18.718 18:14:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:18.718 18:14:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:18.718 18:14:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63445 /var/tmp/spdk-nbd.sock 00:07:18.718 18:14:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63445 ']' 00:07:18.718 18:14:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.718 18:14:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.718 18:14:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
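Round 0 above ends the way it began, in reverse: nbd_stop_disk detaches both devices, nbd_get_disks then returns an empty JSON array, and the count check ('[' 0 -ne 0 ']') confirms nothing is left attached before spdk_kill_instance SIGTERM shuts the app down for the next round. The count is derived exactly as traced, by piping the RPC output through jq:

  # nbd_get_disks returns [{"nbd_device": ..., "bdev_name": ...}, ...];
  # counting /dev/nbd matches gives the number of attached devices
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
    | jq -r '.[] | .nbd_device' | grep -c /dev/nbd
  # note: grep -c exits nonzero when the count is 0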
00:07:18.718 18:14:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.718 18:14:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.718 18:14:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.718 18:14:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:18.718 18:14:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:19.284 Malloc0 00:07:19.284 18:14:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:19.543 Malloc1 00:07:19.543 18:14:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.543 18:14:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:19.801 /dev/nbd0 00:07:19.801 18:14:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:19.801 18:14:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.801 1+0 records in 00:07:19.801 1+0 records out 
00:07:19.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807226 s, 5.1 MB/s 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:19.801 18:14:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:19.801 18:14:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.801 18:14:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.801 18:14:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:20.061 /dev/nbd1 00:07:20.061 18:14:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:20.061 18:14:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:20.061 1+0 records in 00:07:20.061 1+0 records out 00:07:20.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490011 s, 8.4 MB/s 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:20.061 18:14:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:20.061 18:14:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.061 18:14:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.061 18:14:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.061 18:14:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.061 18:14:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:20.319 { 00:07:20.319 "nbd_device": "/dev/nbd0", 00:07:20.319 "bdev_name": "Malloc0" 00:07:20.319 }, 00:07:20.319 { 00:07:20.319 "nbd_device": "/dev/nbd1", 00:07:20.319 "bdev_name": "Malloc1" 00:07:20.319 } 
00:07:20.319 ]' 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:20.319 { 00:07:20.319 "nbd_device": "/dev/nbd0", 00:07:20.319 "bdev_name": "Malloc0" 00:07:20.319 }, 00:07:20.319 { 00:07:20.319 "nbd_device": "/dev/nbd1", 00:07:20.319 "bdev_name": "Malloc1" 00:07:20.319 } 00:07:20.319 ]' 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:20.319 /dev/nbd1' 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:20.319 /dev/nbd1' 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:20.319 256+0 records in 00:07:20.319 256+0 records out 00:07:20.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107256 s, 97.8 MB/s 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:20.319 256+0 records in 00:07:20.319 256+0 records out 00:07:20.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289743 s, 36.2 MB/s 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:20.319 256+0 records in 00:07:20.319 256+0 records out 00:07:20.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303647 s, 34.5 MB/s 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.319 18:14:32 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.319 18:14:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.578 18:14:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.837 18:14:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.096 18:14:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.096 18:14:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.096 18:14:33 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:21.355 18:14:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:21.355 18:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.355 18:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.355 18:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:21.355 18:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.355 18:14:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.355 18:14:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:21.355 18:14:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:21.355 18:14:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:21.355 18:14:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:21.612 18:14:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:22.989 [2024-07-22 18:14:34.765316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.989 [2024-07-22 18:14:35.000009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.989 [2024-07-22 18:14:35.000011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.247 [2024-07-22 18:14:35.191873] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:23.247 [2024-07-22 18:14:35.191989] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:24.624 18:14:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:24.624 spdk_app_start Round 2 00:07:24.624 18:14:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:24.625 18:14:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63445 /var/tmp/spdk-nbd.sock 00:07:24.625 18:14:36 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63445 ']' 00:07:24.625 18:14:36 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:24.625 18:14:36 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:24.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:24.625 18:14:36 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
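After both devices are stopped, nbd_get_count above comes back with 0: the RPC returns an empty JSON array, jq yields nothing, and grep -c counts zero matches while exiting non-zero, which is why a bare true follows it in the trace. A sketch of that pipeline (the rpc.py path is abbreviated here):

    nbd_get_count() {
        local rpc_server=$1 disks_json disks_name count
        disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits 1 on zero matches, so keep the pipeline alive
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }

The caller compares the result against the expected device count; here '[' 0 -ne 0 ']' fails, meaning the round tore down cleanly, so the app instance is sent SIGTERM via spdk_kill_instance and the next round begins.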
00:07:24.625 18:14:36 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:24.625 18:14:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:24.883 18:14:36 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.883 18:14:36 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:24.883 18:14:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:25.142 Malloc0 00:07:25.142 18:14:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:25.455 Malloc1 00:07:25.714 18:14:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:25.714 /dev/nbd0 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.714 1+0 records in 00:07:25.714 1+0 records out 
00:07:25.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339521 s, 12.1 MB/s 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:25.714 18:14:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.714 18:14:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:26.281 /dev/nbd1 00:07:26.281 18:14:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:26.281 18:14:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:26.281 1+0 records in 00:07:26.281 1+0 records out 00:07:26.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361099 s, 11.3 MB/s 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:26.281 18:14:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:26.281 18:14:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.281 18:14:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.281 18:14:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.281 18:14:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.281 18:14:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:26.540 { 00:07:26.540 "nbd_device": "/dev/nbd0", 00:07:26.540 "bdev_name": "Malloc0" 00:07:26.540 }, 00:07:26.540 { 00:07:26.540 "nbd_device": "/dev/nbd1", 00:07:26.540 "bdev_name": "Malloc1" 00:07:26.540 } 
00:07:26.540 ]' 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:26.540 { 00:07:26.540 "nbd_device": "/dev/nbd0", 00:07:26.540 "bdev_name": "Malloc0" 00:07:26.540 }, 00:07:26.540 { 00:07:26.540 "nbd_device": "/dev/nbd1", 00:07:26.540 "bdev_name": "Malloc1" 00:07:26.540 } 00:07:26.540 ]' 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:26.540 /dev/nbd1' 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:26.540 /dev/nbd1' 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:26.540 256+0 records in 00:07:26.540 256+0 records out 00:07:26.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00847075 s, 124 MB/s 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:26.540 256+0 records in 00:07:26.540 256+0 records out 00:07:26.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306541 s, 34.2 MB/s 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:26.540 256+0 records in 00:07:26.540 256+0 records out 00:07:26.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0411071 s, 25.5 MB/s 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:26.540 18:14:38 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.540 18:14:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:26.798 18:14:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:26.798 18:14:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:26.798 18:14:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:26.798 18:14:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.798 18:14:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.798 18:14:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:26.798 18:14:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:26.798 18:14:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.798 18:14:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.798 18:14:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:27.056 18:14:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:27.056 18:14:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:27.056 18:14:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:27.056 18:14:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.056 18:14:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.056 18:14:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:27.056 18:14:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:27.056 18:14:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.056 18:14:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:27.056 18:14:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.056 18:14:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.313 18:14:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:27.313 18:14:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.313 18:14:39 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:27.313 18:14:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:27.313 18:14:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:27.313 18:14:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.313 18:14:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:27.313 18:14:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.313 18:14:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.313 18:14:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:27.313 18:14:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:27.313 18:14:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:27.313 18:14:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:27.878 18:14:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:29.258 [2024-07-22 18:14:41.026258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.258 [2024-07-22 18:14:41.263206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.258 [2024-07-22 18:14:41.263214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.516 [2024-07-22 18:14:41.459530] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:29.516 [2024-07-22 18:14:41.459654] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:30.890 18:14:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63445 /var/tmp/spdk-nbd.sock 00:07:30.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:30.890 18:14:42 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63445 ']' 00:07:30.890 18:14:42 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:30.890 18:14:42 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.890 18:14:42 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
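Round 2 above exercises the same write-then-verify pass as the earlier rounds: nbd_dd_data_verify fills a 1 MiB scratch file from /dev/urandom, dd's it onto each nbd device with oflag=direct, then byte-compares each device against the file with cmp -b -n 1M. A sketch of the helper as reconstructed from the trace (the scratch directory is abbreviated):

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 i
        local tmp_file=/tmp/nbdrandtest   # trace uses test/event/nbdrandtest
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"   # fails loudly on any mismatch
            done
            rm "$tmp_file"
        fi
    }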
00:07:30.890 18:14:42 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.890 18:14:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:31.149 18:14:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.149 18:14:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:31.149 18:14:43 event.app_repeat -- event/event.sh@39 -- # killprocess 63445 00:07:31.149 18:14:43 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 63445 ']' 00:07:31.149 18:14:43 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 63445 00:07:31.149 18:14:43 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:31.149 18:14:43 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:31.149 18:14:43 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63445 00:07:31.149 killing process with pid 63445 00:07:31.149 18:14:43 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:31.149 18:14:43 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:31.149 18:14:43 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63445' 00:07:31.149 18:14:43 event.app_repeat -- common/autotest_common.sh@967 -- # kill 63445 00:07:31.149 18:14:43 event.app_repeat -- common/autotest_common.sh@972 -- # wait 63445 00:07:32.541 spdk_app_start is called in Round 0. 00:07:32.541 Shutdown signal received, stop current app iteration 00:07:32.541 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:07:32.541 spdk_app_start is called in Round 1. 00:07:32.541 Shutdown signal received, stop current app iteration 00:07:32.541 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:07:32.542 spdk_app_start is called in Round 2. 00:07:32.542 Shutdown signal received, stop current app iteration 00:07:32.542 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:07:32.542 spdk_app_start is called in Round 3. 
00:07:32.542 Shutdown signal received, stop current app iteration 00:07:32.542 18:14:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:32.542 18:14:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:32.542 00:07:32.542 real 0m20.886s 00:07:32.542 user 0m44.467s 00:07:32.542 sys 0m3.052s 00:07:32.542 ************************************ 00:07:32.542 END TEST app_repeat 00:07:32.542 ************************************ 00:07:32.542 18:14:44 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.542 18:14:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:32.542 18:14:44 event -- common/autotest_common.sh@1142 -- # return 0 00:07:32.542 18:14:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:32.542 18:14:44 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:32.542 18:14:44 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.542 18:14:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.542 18:14:44 event -- common/autotest_common.sh@10 -- # set +x 00:07:32.542 ************************************ 00:07:32.542 START TEST cpu_locks 00:07:32.542 ************************************ 00:07:32.542 18:14:44 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:32.542 * Looking for test storage... 00:07:32.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:32.542 18:14:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:32.542 18:14:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:32.542 18:14:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:32.542 18:14:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:32.542 18:14:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.542 18:14:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.542 18:14:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.542 ************************************ 00:07:32.542 START TEST default_locks 00:07:32.542 ************************************ 00:07:32.542 18:14:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:32.542 18:14:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63905 00:07:32.542 18:14:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:32.542 18:14:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 63905 00:07:32.542 18:14:44 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 63905 ']' 00:07:32.542 18:14:44 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.542 18:14:44 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.542 18:14:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
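app_repeat's teardown just above goes through killprocess, which first confirms via ps that the pid's command name is sane (reactor_0 here) and that it is not a bare sudo wrapper before signalling it. A simplified sketch; the sudo branch below, which targets the child instead, is an assumption based only on the reactor_0 = sudo check visible in the trace:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0          # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            kill "$(pgrep -P "$pid")"        # assumed: signal the real app, not sudo
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                          # reap it; the pid is a child of this shell
    }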
00:07:32.542 18:14:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.542 18:14:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.802 [2024-07-22 18:14:44.559753] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:32.802 [2024-07-22 18:14:44.559951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63905 ] 00:07:32.802 [2024-07-22 18:14:44.737272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.188 [2024-07-22 18:14:45.009946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.124 18:14:45 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.124 18:14:45 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:34.124 18:14:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 63905 00:07:34.124 18:14:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 63905 00:07:34.124 18:14:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:34.383 18:14:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 63905 00:07:34.383 18:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 63905 ']' 00:07:34.383 18:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 63905 00:07:34.383 18:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:34.383 18:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.383 18:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63905 00:07:34.383 killing process with pid 63905 00:07:34.383 18:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:34.383 18:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.383 18:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63905' 00:07:34.383 18:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 63905 00:07:34.383 18:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 63905 00:07:36.917 18:14:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63905 00:07:36.917 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:36.917 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63905 00:07:36.917 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:36.917 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.917 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:36.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
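default_locks starts a fresh spdk_tgt on core mask 0x1 and then asserts, via locks_exist, that the target is holding its per-core file lock. The check is exactly what the trace shows: lslocks for the pid, filtered for the spdk_cpu_lock name prefix:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # usage: non-zero exit unless the pid holds at least one core lock
    locks_exist "$spdk_tgt_pid"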
00:07:36.917 ERROR: process (pid: 63905) is no longer running 00:07:36.917 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 63905 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 63905 ']' 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.918 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63905) - No such process 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:36.918 00:07:36.918 real 0m4.041s 00:07:36.918 user 0m4.030s 00:07:36.918 sys 0m0.685s 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.918 18:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.918 ************************************ 00:07:36.918 END TEST default_locks 00:07:36.918 ************************************ 00:07:36.918 18:14:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:36.918 18:14:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:36.918 18:14:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.918 18:14:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.918 18:14:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.918 ************************************ 00:07:36.918 START TEST default_locks_via_rpc 00:07:36.918 ************************************ 00:07:36.918 18:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:36.918 18:14:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63975 00:07:36.918 18:14:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
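The ERROR and "No such process" lines above are the expected outcome: after killprocess, the test wraps waitforlisten in NOT, so the assertion passes precisely because the listener is gone. A simplified sketch of the wrapper; the real helper in autotest_common.sh also validates the argument (the valid_exec_arg / type -t steps in the trace) and has more careful signal and expected-status handling than shown here:

    NOT() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && return 1   # died on a signal - not a clean failure
        ((!es == 0))               # invert: succeed only on a non-zero exit
    }

    # from the trace: the target was killed, so this must fail for the test to pass
    NOT waitforlisten 63905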
00:07:36.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.918 18:14:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 63975 00:07:36.918 18:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63975 ']' 00:07:36.918 18:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.918 18:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.918 18:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.918 18:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.918 18:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.918 [2024-07-22 18:14:48.624363] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:36.918 [2024-07-22 18:14:48.624734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63975 ] 00:07:36.918 [2024-07-22 18:14:48.787774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.177 [2024-07-22 18:14:49.028127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 63975 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 63975 00:07:38.113 18:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.374 18:14:50 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 63975 00:07:38.374 18:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 63975 ']' 00:07:38.374 18:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 63975 00:07:38.374 18:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:38.374 18:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:38.374 18:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63975 00:07:38.374 killing process with pid 63975 00:07:38.374 18:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:38.374 18:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:38.374 18:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63975' 00:07:38.374 18:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 63975 00:07:38.374 18:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 63975 00:07:40.913 00:07:40.913 real 0m3.942s 00:07:40.913 user 0m3.957s 00:07:40.913 sys 0m0.700s 00:07:40.913 18:14:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.913 ************************************ 00:07:40.913 END TEST default_locks_via_rpc 00:07:40.913 ************************************ 00:07:40.913 18:14:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.913 18:14:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:40.913 18:14:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:40.913 18:14:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:40.913 18:14:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.913 18:14:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.913 ************************************ 00:07:40.913 START TEST non_locking_app_on_locked_coremask 00:07:40.913 ************************************ 00:07:40.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
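default_locks_via_rpc above shows the same lock being dropped and re-taken at runtime: framework_disable_cpumask_locks makes the lock files disappear, framework_enable_cpumask_locks brings them back, and locks_exist confirms it. A sketch of the sequence; the glob used for the empty check is an assumption about the lock-file location, which the trace identifies only by the spdk_cpu_lock prefix:

    shopt -s nullglob
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lock_files=(/var/tmp/spdk_cpu_lock*)     # assumed location of the lock files
    (( ${#lock_files[@]} == 0 ))             # none may exist while disabled
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # and the lock is back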
00:07:40.913 18:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:40.913 18:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64049 00:07:40.913 18:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64049 /var/tmp/spdk.sock 00:07:40.913 18:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64049 ']' 00:07:40.913 18:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.913 18:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.913 18:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.913 18:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.913 18:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.913 18:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.913 [2024-07-22 18:14:52.636245] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:40.913 [2024-07-22 18:14:52.636440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64049 ] 00:07:40.913 [2024-07-22 18:14:52.811325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.172 [2024-07-22 18:14:53.051048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.107 18:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.107 18:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:42.107 18:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64066 00:07:42.107 18:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64066 /var/tmp/spdk2.sock 00:07:42.107 18:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64066 ']' 00:07:42.107 18:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:42.107 18:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.107 18:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.107 18:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
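non_locking_app_on_locked_coremask then stands up two targets on the same core mask: the first takes the core lock as usual, while the second is started with --disable-cpumask-locks and its own RPC socket so it can coexist on core 0 instead of aborting. A sketch of the setup traced above (the backgrounding and pid bookkeeping are implied by the harness rather than visible in the trace):

    build/bin/spdk_tgt -m 0x1 &                       # holds the core 0 lock
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock         # both now run on core 0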
00:07:42.107 18:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.107 18:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.107 [2024-07-22 18:14:53.959733] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:42.107 [2024-07-22 18:14:53.960256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64066 ] 00:07:42.367 [2024-07-22 18:14:54.147794] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:42.367 [2024-07-22 18:14:54.147969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.626 [2024-07-22 18:14:54.610529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.159 18:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.159 18:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:45.159 18:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64049 00:07:45.159 18:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64049 00:07:45.159 18:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:45.417 18:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64049 00:07:45.417 18:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64049 ']' 00:07:45.417 18:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64049 00:07:45.417 18:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:45.417 18:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.417 18:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64049 00:07:45.417 killing process with pid 64049 00:07:45.417 18:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.417 18:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.417 18:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64049' 00:07:45.417 18:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64049 00:07:45.417 18:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64049 00:07:50.689 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64066 00:07:50.689 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64066 ']' 00:07:50.689 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64066 00:07:50.689 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 
-- # uname 00:07:50.689 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.689 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64066 00:07:50.689 killing process with pid 64066 00:07:50.689 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:50.689 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:50.689 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64066' 00:07:50.689 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64066 00:07:50.689 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64066 00:07:52.063 ************************************ 00:07:52.063 END TEST non_locking_app_on_locked_coremask 00:07:52.063 ************************************ 00:07:52.063 00:07:52.063 real 0m11.566s 00:07:52.063 user 0m11.968s 00:07:52.063 sys 0m1.505s 00:07:52.063 18:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.063 18:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.322 18:15:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:52.322 18:15:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:52.322 18:15:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.322 18:15:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.322 18:15:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.322 ************************************ 00:07:52.322 START TEST locking_app_on_unlocked_coremask 00:07:52.322 ************************************ 00:07:52.322 18:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:52.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.323 18:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64213 00:07:52.323 18:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64213 /var/tmp/spdk.sock 00:07:52.323 18:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:52.323 18:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64213 ']' 00:07:52.323 18:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.323 18:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.323 18:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:52.323 18:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.323 18:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.323 [2024-07-22 18:15:04.252349] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:52.323 [2024-07-22 18:15:04.252808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64213 ] 00:07:52.581 [2024-07-22 18:15:04.424714] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:52.581 [2024-07-22 18:15:04.425493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.843 [2024-07-22 18:15:04.660041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:53.779 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.779 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:53.779 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64235 00:07:53.779 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:53.779 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64235 /var/tmp/spdk2.sock 00:07:53.779 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64235 ']' 00:07:53.779 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:53.779 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.779 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:53.779 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.779 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.779 [2024-07-22 18:15:05.576456] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:53.779 [2024-07-22 18:15:05.576973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64235 ] 00:07:53.779 [2024-07-22 18:15:05.766068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.347 [2024-07-22 18:15:06.239771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.248 18:15:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.248 18:15:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:56.248 18:15:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64235 00:07:56.248 18:15:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64235 00:07:56.248 18:15:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:57.183 18:15:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64213 00:07:57.183 18:15:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64213 ']' 00:07:57.183 18:15:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64213 00:07:57.183 18:15:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:57.183 18:15:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:57.183 18:15:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64213 00:07:57.183 killing process with pid 64213 00:07:57.183 18:15:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:57.183 18:15:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:57.183 18:15:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64213' 00:07:57.183 18:15:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64213 00:07:57.183 18:15:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64213 00:08:02.450 18:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64235 00:08:02.450 18:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64235 ']' 00:08:02.450 18:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64235 00:08:02.450 18:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:02.450 18:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:02.450 18:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64235 00:08:02.450 killing process with pid 64235 00:08:02.450 18:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:02.450 18:15:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:02.450 18:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64235' 00:08:02.450 18:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64235 00:08:02.450 18:15:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64235 00:08:03.823 00:08:03.823 real 0m11.668s 00:08:03.823 user 0m12.079s 00:08:03.823 sys 0m1.486s 00:08:03.823 18:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.823 ************************************ 00:08:03.823 END TEST locking_app_on_unlocked_coremask 00:08:03.823 ************************************ 00:08:03.823 18:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:04.129 18:15:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:04.129 18:15:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:04.129 18:15:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.129 18:15:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.129 18:15:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:04.129 ************************************ 00:08:04.129 START TEST locking_app_on_locked_coremask 00:08:04.129 ************************************ 00:08:04.129 18:15:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:08:04.129 18:15:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64383 00:08:04.129 18:15:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64383 /var/tmp/spdk.sock 00:08:04.129 18:15:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64383 ']' 00:08:04.129 18:15:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:04.129 18:15:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.129 18:15:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:04.129 18:15:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.129 18:15:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:04.129 18:15:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:04.129 [2024-07-22 18:15:15.978104] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:04.129 [2024-07-22 18:15:15.978300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64383 ] 00:08:04.388 [2024-07-22 18:15:16.157229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.646 [2024-07-22 18:15:16.429210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64404 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64404 /var/tmp/spdk2.sock 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64404 /var/tmp/spdk2.sock 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:05.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64404 /var/tmp/spdk2.sock 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64404 ']' 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.582 18:15:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.582 [2024-07-22 18:15:17.358314] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:05.582 [2024-07-22 18:15:17.358528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64404 ] 00:08:05.582 [2024-07-22 18:15:17.547459] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64383 has claimed it. 00:08:05.582 [2024-07-22 18:15:17.547579] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:06.148 ERROR: process (pid: 64404) is no longer running 00:08:06.148 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64404) - No such process 00:08:06.148 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.148 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:08:06.148 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:08:06.148 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:06.148 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:06.148 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:06.148 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64383 00:08:06.148 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64383 00:08:06.148 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:06.714 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64383 00:08:06.714 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64383 ']' 00:08:06.714 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64383 00:08:06.714 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:06.714 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:06.714 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64383 00:08:06.714 killing process with pid 64383 00:08:06.714 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:06.714 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:06.714 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64383' 00:08:06.714 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64383 00:08:06.714 18:15:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64383 00:08:09.251 ************************************ 00:08:09.251 END TEST locking_app_on_locked_coremask 00:08:09.251 ************************************ 00:08:09.251 00:08:09.251 real 0m4.848s 00:08:09.251 user 0m5.141s 00:08:09.251 sys 0m0.898s 00:08:09.251 18:15:20 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.251 18:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.251 18:15:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:09.251 18:15:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:09.251 18:15:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.251 18:15:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.251 18:15:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.251 ************************************ 00:08:09.251 START TEST locking_overlapped_coremask 00:08:09.251 ************************************ 00:08:09.251 18:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:08:09.251 18:15:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64474 00:08:09.252 18:15:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:09.252 18:15:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64474 /var/tmp/spdk.sock 00:08:09.252 18:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64474 ']' 00:08:09.252 18:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.252 18:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.252 18:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.252 18:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.252 18:15:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.252 [2024-07-22 18:15:20.885396] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:09.252 [2024-07-22 18:15:20.885599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64474 ] 00:08:09.252 [2024-07-22 18:15:21.052464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:09.510 [2024-07-22 18:15:21.294545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.510 [2024-07-22 18:15:21.294697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.510 [2024-07-22 18:15:21.294756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.076 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.076 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:10.076 18:15:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64492 00:08:10.076 18:15:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64492 /var/tmp/spdk2.sock 00:08:10.076 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:08:10.076 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64492 /var/tmp/spdk2.sock 00:08:10.076 18:15:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:10.076 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:10.076 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.076 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:10.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:10.438 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.438 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64492 /var/tmp/spdk2.sock 00:08:10.438 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64492 ']' 00:08:10.438 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:10.438 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.438 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:10.438 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.438 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.438 [2024-07-22 18:15:22.215072] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:10.438 [2024-07-22 18:15:22.215255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64492 ] 00:08:10.438 [2024-07-22 18:15:22.397468] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64474 has claimed it. 00:08:10.438 [2024-07-22 18:15:22.397566] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:11.007 ERROR: process (pid: 64492) is no longer running 00:08:11.007 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64492) - No such process 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64474 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 64474 ']' 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 64474 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64474 00:08:11.007 killing process with pid 64474 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64474' 00:08:11.007 18:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 64474 00:08:11.007 18:15:22 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 64474 00:08:13.540 ************************************ 00:08:13.540 END TEST locking_overlapped_coremask 00:08:13.540 ************************************ 00:08:13.540 00:08:13.540 real 0m4.359s 00:08:13.540 user 0m11.318s 00:08:13.540 sys 0m0.683s 00:08:13.540 18:15:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.540 18:15:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:13.540 18:15:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:13.540 18:15:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:13.540 18:15:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.540 18:15:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.540 18:15:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:13.540 ************************************ 00:08:13.540 START TEST locking_overlapped_coremask_via_rpc 00:08:13.540 ************************************ 00:08:13.540 18:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:08:13.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.540 18:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64556 00:08:13.540 18:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64556 /var/tmp/spdk.sock 00:08:13.540 18:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:13.540 18:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64556 ']' 00:08:13.540 18:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.540 18:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.540 18:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.540 18:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.540 18:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.540 [2024-07-22 18:15:25.281003] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:13.540 [2024-07-22 18:15:25.281169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64556 ] 00:08:13.540 [2024-07-22 18:15:25.443921] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:13.540 [2024-07-22 18:15:25.443984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:13.799 [2024-07-22 18:15:25.681461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.799 [2024-07-22 18:15:25.681583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.799 [2024-07-22 18:15:25.681595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:14.735 18:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.736 18:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:14.736 18:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64574 00:08:14.736 18:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64574 /var/tmp/spdk2.sock 00:08:14.736 18:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:14.736 18:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64574 ']' 00:08:14.736 18:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:14.736 18:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.736 18:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:14.736 18:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.736 18:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.736 [2024-07-22 18:15:26.598705] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:14.736 [2024-07-22 18:15:26.599382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64574 ] 00:08:14.994 [2024-07-22 18:15:26.781782] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:14.994 [2024-07-22 18:15:26.781853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:15.253 [2024-07-22 18:15:27.264550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.253 [2024-07-22 18:15:27.267758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.253 [2024-07-22 18:15:27.267769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.785 [2024-07-22 18:15:29.242981] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64556 has claimed it. 00:08:17.785 request: 00:08:17.785 { 00:08:17.785 "method": "framework_enable_cpumask_locks", 00:08:17.785 "req_id": 1 00:08:17.785 } 00:08:17.785 Got JSON-RPC error response 00:08:17.785 response: 00:08:17.785 { 00:08:17.785 "code": -32603, 00:08:17.785 "message": "Failed to claim CPU core: 2" 00:08:17.785 } 00:08:17.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64556 /var/tmp/spdk.sock 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64556 ']' 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64574 /var/tmp/spdk2.sock 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64574 ']' 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:17.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:17.785 ************************************ 00:08:17.785 END TEST locking_overlapped_coremask_via_rpc 00:08:17.785 ************************************ 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:17.785 00:08:17.785 real 0m4.555s 00:08:17.785 user 0m1.462s 00:08:17.785 sys 0m0.223s 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.785 18:15:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.785 18:15:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:17.785 18:15:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:17.785 18:15:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64556 ]] 00:08:17.785 18:15:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64556 00:08:17.785 18:15:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64556 ']' 00:08:17.785 18:15:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64556 00:08:17.785 18:15:29 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:08:17.785 18:15:29 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:17.785 18:15:29 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64556 00:08:17.785 18:15:29 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:17.785 killing process with pid 64556 00:08:17.785 18:15:29 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:17.785 18:15:29 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64556' 00:08:17.785 18:15:29 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64556 00:08:17.785 18:15:29 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64556 00:08:20.317 18:15:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64574 ]] 00:08:20.317 18:15:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64574 00:08:20.317 18:15:32 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64574 ']' 00:08:20.317 18:15:32 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64574 00:08:20.317 18:15:32 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:08:20.317 18:15:32 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.317 18:15:32 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64574 00:08:20.317 18:15:32 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:08:20.317 18:15:32 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:08:20.317 killing process with pid 64574 00:08:20.317 18:15:32 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64574' 00:08:20.317 18:15:32 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64574 00:08:20.317 18:15:32 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64574 00:08:22.847 18:15:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:22.847 Process with pid 64556 is not found 00:08:22.847 18:15:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:22.847 18:15:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64556 ]] 00:08:22.847 18:15:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64556 00:08:22.847 18:15:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64556 ']' 00:08:22.847 18:15:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64556 00:08:22.847 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64556) - No such process 00:08:22.847 18:15:34 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64556 is not found' 00:08:22.847 18:15:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64574 ]] 00:08:22.847 18:15:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64574 00:08:22.847 18:15:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64574 ']' 00:08:22.847 18:15:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64574 00:08:22.847 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64574) - No such process 00:08:22.847 Process with pid 64574 is not found 00:08:22.847 18:15:34 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64574 is not found' 00:08:22.847 18:15:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:22.847 ************************************ 00:08:22.847 END TEST cpu_locks 00:08:22.847 ************************************ 00:08:22.847 00:08:22.847 real 0m49.923s 00:08:22.847 user 1m23.563s 00:08:22.847 sys 0m7.318s 00:08:22.847 18:15:34 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.847 18:15:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.847 18:15:34 event -- common/autotest_common.sh@1142 -- # return 0 00:08:22.847 00:08:22.847 real 1m22.048s 00:08:22.847 user 2m24.379s 00:08:22.847 sys 0m11.480s 00:08:22.847 ************************************ 00:08:22.847 END TEST event 00:08:22.847 ************************************ 00:08:22.847 18:15:34 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.847 18:15:34 event -- common/autotest_common.sh@10 -- # set +x 00:08:22.847 18:15:34 -- common/autotest_common.sh@1142 -- # return 0 00:08:22.847 18:15:34 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:22.847 18:15:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:22.847 18:15:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.847 18:15:34 -- common/autotest_common.sh@10 -- # set +x 00:08:22.847 ************************************ 00:08:22.847 START TEST thread 
00:08:22.847 ************************************ 00:08:22.847 18:15:34 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:22.847 * Looking for test storage... 00:08:22.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:22.847 18:15:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:22.847 18:15:34 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:22.847 18:15:34 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.847 18:15:34 thread -- common/autotest_common.sh@10 -- # set +x 00:08:22.847 ************************************ 00:08:22.847 START TEST thread_poller_perf 00:08:22.847 ************************************ 00:08:22.848 18:15:34 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:22.848 [2024-07-22 18:15:34.470957] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:22.848 [2024-07-22 18:15:34.471308] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64761 ] 00:08:22.848 [2024-07-22 18:15:34.647442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.106 [2024-07-22 18:15:34.923309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.106 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:24.506 ====================================== 00:08:24.506 busy:2208477808 (cyc) 00:08:24.506 total_run_count: 309000 00:08:24.506 tsc_hz: 2200000000 (cyc) 00:08:24.506 ====================================== 00:08:24.506 poller_cost: 7147 (cyc), 3248 (nsec) 00:08:24.506 00:08:24.506 real 0m1.910s 00:08:24.506 user 0m1.680s 00:08:24.506 sys 0m0.119s 00:08:24.506 18:15:36 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.506 18:15:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:24.506 ************************************ 00:08:24.506 END TEST thread_poller_perf 00:08:24.506 ************************************ 00:08:24.506 18:15:36 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:24.506 18:15:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:24.506 18:15:36 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:24.506 18:15:36 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.506 18:15:36 thread -- common/autotest_common.sh@10 -- # set +x 00:08:24.506 ************************************ 00:08:24.506 START TEST thread_poller_perf 00:08:24.506 ************************************ 00:08:24.506 18:15:36 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:24.506 [2024-07-22 18:15:36.429703] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:24.506 [2024-07-22 18:15:36.429865] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64803 ] 00:08:24.764 [2024-07-22 18:15:36.595409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.023 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:25.023 [2024-07-22 18:15:36.820791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.398 ====================================== 00:08:26.398 busy:2204452844 (cyc) 00:08:26.398 total_run_count: 4026000 00:08:26.398 tsc_hz: 2200000000 (cyc) 00:08:26.398 ====================================== 00:08:26.398 poller_cost: 547 (cyc), 248 (nsec) 00:08:26.398 ************************************ 00:08:26.398 END TEST thread_poller_perf 00:08:26.398 ************************************ 00:08:26.399 00:08:26.399 real 0m1.815s 00:08:26.399 user 0m1.585s 00:08:26.399 sys 0m0.121s 00:08:26.399 18:15:38 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.399 18:15:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:26.399 18:15:38 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:26.399 18:15:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:26.399 ************************************ 00:08:26.399 END TEST thread 00:08:26.399 ************************************ 00:08:26.399 00:08:26.399 real 0m3.903s 00:08:26.399 user 0m3.331s 00:08:26.399 sys 0m0.343s 00:08:26.399 18:15:38 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.399 18:15:38 thread -- common/autotest_common.sh@10 -- # set +x 00:08:26.399 18:15:38 -- common/autotest_common.sh@1142 -- # return 0 00:08:26.399 18:15:38 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:26.399 18:15:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:26.399 18:15:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.399 18:15:38 -- common/autotest_common.sh@10 -- # set +x 00:08:26.399 ************************************ 00:08:26.399 START TEST accel 00:08:26.399 ************************************ 00:08:26.399 18:15:38 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:26.399 * Looking for test storage... 00:08:26.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:26.399 18:15:38 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:26.399 18:15:38 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:26.399 18:15:38 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:26.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:26.399 18:15:38 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=64879 00:08:26.399 18:15:38 accel -- accel/accel.sh@63 -- # waitforlisten 64879 00:08:26.399 18:15:38 accel -- common/autotest_common.sh@829 -- # '[' -z 64879 ']' 00:08:26.399 18:15:38 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.399 18:15:38 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.399 18:15:38 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:26.399 18:15:38 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.399 18:15:38 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:26.399 18:15:38 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:26.399 18:15:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.399 18:15:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.399 18:15:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.399 18:15:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.399 18:15:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.399 18:15:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.399 18:15:38 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:26.399 18:15:38 accel -- accel/accel.sh@41 -- # jq -r . 00:08:26.657 [2024-07-22 18:15:38.495633] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:26.658 [2024-07-22 18:15:38.496047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64879 ] 00:08:26.915 [2024-07-22 18:15:38.673470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.915 [2024-07-22 18:15:38.913625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.853 18:15:39 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.853 18:15:39 accel -- common/autotest_common.sh@862 -- # return 0 00:08:27.853 18:15:39 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:27.853 18:15:39 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:27.853 18:15:39 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:27.853 18:15:39 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:27.853 18:15:39 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:27.853 18:15:39 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:27.853 18:15:39 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.853 18:15:39 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.853 18:15:39 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:27.853 18:15:39 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.853 18:15:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.853 18:15:39 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.853 18:15:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.853 18:15:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.853 [the same for opc_opt / IFS== / read -r opc module / expected_opcs["$opc"]=software xtrace quartet repeats for each remaining opcode; every opcode is assigned to the software module] 00:08:27.853 18:15:39 accel -- accel/accel.sh@75 -- # killprocess 64879 00:08:27.853 18:15:39 accel -- common/autotest_common.sh@948 -- # '[' -z 64879 ']' 00:08:27.854 18:15:39 accel -- common/autotest_common.sh@952 -- # kill -0 64879 00:08:27.854 18:15:39 accel -- common/autotest_common.sh@953 -- # uname 00:08:27.854 18:15:39 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:27.854 18:15:39 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64879 00:08:27.854 18:15:39 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:27.854 18:15:39 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:27.854 18:15:39 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64879' killing process with pid 64879 00:08:27.854 18:15:39 accel -- common/autotest_common.sh@967 -- # kill 64879 00:08:27.854 18:15:39 accel -- common/autotest_common.sh@972 -- # wait 64879
00:08:30.391 18:15:42 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.391 18:15:42 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:30.391 18:15:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:30.391 18:15:42 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:30.391 18:15:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:30.391 18:15:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.391 18:15:42 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.391 ************************************ 00:08:30.391 START TEST accel_missing_filename 00:08:30.391 ************************************ 00:08:30.391 18:15:42 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:08:30.391 18:15:42 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:30.391 18:15:42 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:30.391 18:15:42 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:30.391 18:15:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.391 18:15:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:30.391 18:15:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.391 18:15:42 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:30.391 18:15:42 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:30.391 18:15:42 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:30.391 18:15:42 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.391 18:15:42 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.391 18:15:42 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.391 18:15:42 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.391 18:15:42 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.391 18:15:42 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:30.391 18:15:42 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:30.391 [2024-07-22 18:15:42.174332] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:30.391 [2024-07-22 18:15:42.174470] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64954 ] 00:08:30.391 [2024-07-22 18:15:42.341049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.650 [2024-07-22 18:15:42.570922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.909 [2024-07-22 18:15:42.776539] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.492 [2024-07-22 18:15:43.259748] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:31.750 A filename is required. 
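The "A filename is required." error above is the expected failure: as the accel_perf usage text later in this log notes, compress/decompress workloads need an uncompressed input file passed via -l. A passing variant of the same invocation would look like this (illustrative; the bib file is the one the compress_verify test below uses):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib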
00:08:31.750 18:15:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:31.750 18:15:43 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:31.750 18:15:43 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:31.750 ************************************ 00:08:31.750 END TEST accel_missing_filename 00:08:31.750 ************************************ 00:08:31.750 18:15:43 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:31.750 18:15:43 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:31.750 18:15:43 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:31.750 00:08:31.750 real 0m1.534s 00:08:31.750 user 0m1.279s 00:08:31.750 sys 0m0.197s 00:08:31.750 18:15:43 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.750 18:15:43 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:31.750 18:15:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:31.750 18:15:43 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:31.750 18:15:43 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:31.750 18:15:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.750 18:15:43 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.750 ************************************ 00:08:31.750 START TEST accel_compress_verify 00:08:31.750 ************************************ 00:08:31.750 18:15:43 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:31.750 18:15:43 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:31.751 18:15:43 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:31.751 18:15:43 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:31.751 18:15:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.751 18:15:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:31.751 18:15:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.751 18:15:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:31.751 18:15:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:31.751 18:15:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:31.751 18:15:43 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.751 18:15:43 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.751 18:15:43 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.751 18:15:43 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.751 18:15:43 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.751 18:15:43 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:08:31.751 18:15:43 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:31.751 [2024-07-22 18:15:43.759385] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:31.751 [2024-07-22 18:15:43.759527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64991 ] 00:08:32.010 [2024-07-22 18:15:43.925636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.268 [2024-07-22 18:15:44.161208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.527 [2024-07-22 18:15:44.367667] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.095 [2024-07-22 18:15:44.858769] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:33.355 00:08:33.355 Compression does not support the verify option, aborting. 00:08:33.355 18:15:45 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:33.355 18:15:45 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:33.355 18:15:45 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:33.355 18:15:45 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:33.355 18:15:45 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:33.355 ************************************ 00:08:33.355 END TEST accel_compress_verify 00:08:33.355 ************************************ 00:08:33.355 18:15:45 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:33.355 00:08:33.355 real 0m1.554s 00:08:33.355 user 0m1.299s 00:08:33.355 sys 0m0.189s 00:08:33.355 18:15:45 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.355 18:15:45 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:33.355 18:15:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.355 18:15:45 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:33.355 18:15:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:33.355 18:15:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.355 18:15:45 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.355 ************************************ 00:08:33.355 START TEST accel_wrong_workload 00:08:33.355 ************************************ 00:08:33.355 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:08:33.355 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:33.355 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:33.355 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:33.355 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.355 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:33.355 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.355 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
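The es=234 -> es=106 -> es=1 and es=161 -> es=33 -> es=1 sequences traced above show the NOT wrapper normalizing the child's exit status before asserting it is non-zero. A minimal sketch of that pattern as it appears in the trace (the exact helper internals are assumptions, not the suite's code):

    es=234                                           # raw exit status from accel_perf
    if (( es > 128 )); then es=$(( es - 128 )); fi   # fold statuses above 128: 234 -> 106
    case "$es" in 0) ;; *) es=1 ;; esac              # collapse any failure to 1
    (( !es == 0 )) && echo 'NOT: command failed, as expected'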
00:08:33.355 18:15:45 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:33.355 18:15:45 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:33.355 18:15:45 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.355 18:15:45 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.355 18:15:45 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.355 18:15:45 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.355 18:15:45 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.355 18:15:45 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:33.355 18:15:45 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:33.355 Unsupported workload type: foobar 00:08:33.355 [2024-07-22 18:15:45.361423] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:33.615 accel_perf options: 00:08:33.615 [-h help message] 00:08:33.615 [-q queue depth per core] 00:08:33.615 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:33.615 [-T number of threads per core 00:08:33.615 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:33.615 [-t time in seconds] 00:08:33.615 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:33.615 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:33.615 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:33.615 [-l for compress/decompress workloads, name of uncompressed input file 00:08:33.615 [-S for crc32c workload, use this seed value (default 0) 00:08:33.615 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:33.615 [-f for fill workload, use this BYTE value (default 255) 00:08:33.616 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:33.616 [-y verify result if this switch is on] 00:08:33.616 [-a tasks to allocate per core (default: same value as -q)] 00:08:33.616 Can be used to spread operations across a wider range of memory. 
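For contrast with the rejected foobar workload, any -w value from the supported list in the usage text above is accepted; an illustrative valid invocation (using the xor minimum of 2 source buffers noted under -x):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2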
00:08:33.616 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:33.616 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:33.616 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:33.616 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:33.616 00:08:33.616 real 0m0.085s 00:08:33.616 user 0m0.105s 00:08:33.616 sys 0m0.048s 00:08:33.616 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.616 18:15:45 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:33.616 ************************************ 00:08:33.616 END TEST accel_wrong_workload 00:08:33.616 ************************************ 00:08:33.616 18:15:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.616 18:15:45 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:33.616 18:15:45 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:33.616 18:15:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.616 18:15:45 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.616 ************************************ 00:08:33.616 START TEST accel_negative_buffers 00:08:33.616 ************************************ 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:33.616 18:15:45 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:33.616 18:15:45 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:33.616 18:15:45 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.616 18:15:45 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.616 18:15:45 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.616 18:15:45 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.616 18:15:45 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.616 18:15:45 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:33.616 18:15:45 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:33.616 -x option must be non-negative. 
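The valid_exec_arg / type -t lines traced above are how the NOT helper first checks that its argument is runnable at all. A rough sketch of that check, reconstructed from the traced names (the branch bodies are assumptions):

    valid_exec_arg() {
        local arg=$1                         # e.g. accel_perf
        case "$(type -t "$arg")" in          # file, function, builtin, alias, or empty
            file|function|builtin|alias) return 0 ;;   # executable
            *) return 1 ;;                             # not a known command
        esac
    }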
00:08:33.616 [2024-07-22 18:15:45.487943] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:33.616 accel_perf options: 00:08:33.616 [-h help message] 00:08:33.616 [-q queue depth per core] 00:08:33.616 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:33.616 [-T number of threads per core 00:08:33.616 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:33.616 [-t time in seconds] 00:08:33.616 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:33.616 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:33.616 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:33.616 [-l for compress/decompress workloads, name of uncompressed input file 00:08:33.616 [-S for crc32c workload, use this seed value (default 0) 00:08:33.616 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:33.616 [-f for fill workload, use this BYTE value (default 255) 00:08:33.616 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:33.616 [-y verify result if this switch is on] 00:08:33.616 [-a tasks to allocate per core (default: same value as -q)] 00:08:33.616 Can be used to spread operations across a wider range of memory. 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:33.616 00:08:33.616 real 0m0.072s 00:08:33.616 user 0m0.072s 00:08:33.616 sys 0m0.040s 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.616 ************************************ 00:08:33.616 END TEST accel_negative_buffers 00:08:33.616 ************************************ 00:08:33.616 18:15:45 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:33.616 18:15:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.616 18:15:45 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:33.616 18:15:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:33.616 18:15:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.616 18:15:45 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.616 ************************************ 00:08:33.616 START TEST accel_crc32c 00:08:33.616 ************************************ 00:08:33.616 18:15:45 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:33.616 18:15:45 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:33.616 18:15:45 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:33.616 18:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.616 18:15:45 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:33.616 18:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.616 18:15:45 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:08:33.616 18:15:45 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:33.616 [build_accel_config xtrace elided — the same accel_json_cfg=() / [[ 0 -gt 0 ]] / [[ -n '' ]] / local IFS=, / jq -r . sequence traced for the earlier tests] 00:08:33.616 [2024-07-22 18:15:45.609988] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... [2024-07-22 18:15:45.610188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65069 ] 00:08:33.875 [2024-07-22 18:15:45.789125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.134 [2024-07-22 18:15:46.030343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.393 [config-parsing xtrace condensed; each value below was traced as accel/accel.sh@20 -- # val=… followed by the same case "$var" in / IFS=: / read -r var val lines: val=, val=, val=0x1, val=, val=, val=crc32c (accel_opc=crc32c), val=32, val='4096 bytes', val=, val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes, val=] 00:08:34.394 18:15:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:34.394 18:15:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 18:15:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 18:15:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var
val 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:36.297 ************************************ 00:08:36.297 END TEST accel_crc32c 00:08:36.297 ************************************ 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:36.297 18:15:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:36.297 00:08:36.297 real 0m2.574s 00:08:36.297 user 0m2.284s 00:08:36.297 sys 0m0.194s 00:08:36.297 18:15:48 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.297 18:15:48 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:36.297 18:15:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:36.297 18:15:48 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:36.297 18:15:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:36.297 18:15:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.297 18:15:48 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.297 ************************************ 00:08:36.297 START TEST accel_crc32c_C2 00:08:36.297 ************************************ 00:08:36.297 18:15:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:36.297 18:15:48 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:36.297 18:15:48 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:36.297 18:15:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 18:15:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 18:15:48 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:36.297 18:15:48 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:36.297 18:15:48 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:36.297 [build_accel_config xtrace elided — same sequence as above] 00:08:36.297 [2024-07-22 18:15:48.235786] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... [2024-07-22 18:15:48.235993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65115 ] 00:08:36.556 [2024-07-22 18:15:48.424622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.815 [2024-07-22 18:15:48.663328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.075 [config-parsing xtrace condensed; values traced in order: val=, val=, val=0x1, val=, val=, val=crc32c (accel_opc=crc32c), val=0, val='4096 bytes', val=, val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes, val=, val= — each followed by the same case "$var" in / IFS=: / read -r var val lines] 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.051 18:15:50
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.051 00:08:39.051 real 0m2.594s 00:08:39.051 user 0m2.283s 00:08:39.051 sys 0m0.212s 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.051 18:15:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:39.051 ************************************ 00:08:39.051 END TEST accel_crc32c_C2 00:08:39.051 ************************************ 00:08:39.051 18:15:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:39.051 18:15:50 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:39.051 18:15:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:39.051 18:15:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.051 18:15:50 accel -- common/autotest_common.sh@10 -- # set +x 00:08:39.051 ************************************ 00:08:39.051 START TEST accel_copy 00:08:39.051 ************************************ 00:08:39.051 18:15:50 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:08:39.051 18:15:50 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:39.051 18:15:50 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:39.051 18:15:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.051 18:15:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.051 18:15:50 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:39.051 18:15:50 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 18:15:50 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:39.051 [build_accel_config xtrace elided — same sequence as above] 00:08:39.051 [2024-07-22 18:15:50.873318] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... [2024-07-22 18:15:50.873492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65162 ] 00:08:39.311 [2024-07-22 18:15:51.049279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.571 [2024-07-22 18:15:51.291422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.571 [config-parsing xtrace condensed; values traced in order: val=, val=, val=0x1, val=, val=, val=copy (accel_opc=copy), val='4096 bytes', val=, val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes, val=, val= — each followed by the same case "$var" in / IFS=: / read -r var val lines] 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@19
-- # read -r var val 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:41.475 18:15:53 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:41.475 00:08:41.475 real 0m2.572s 00:08:41.475 user 0m2.255s 00:08:41.475 sys 0m0.216s 00:08:41.475 18:15:53 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.475 ************************************ 00:08:41.475 END TEST accel_copy 00:08:41.475 ************************************ 00:08:41.475 18:15:53 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:41.475 18:15:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:41.475 18:15:53 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:41.475 18:15:53 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:41.475 18:15:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.475 18:15:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:41.475 ************************************ 00:08:41.475 START TEST accel_fill 00:08:41.475 ************************************ 00:08:41.475 18:15:53 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:41.475 18:15:53 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:41.475 18:15:53 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:41.475 18:15:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.475 18:15:53 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:41.475 18:15:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.475 18:15:53 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:41.475 18:15:53 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:41.475 18:15:53 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:41.475 18:15:53 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:41.475 18:15:53 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:41.475 18:15:53 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:41.475 18:15:53 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:41.475 18:15:53 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:41.475 18:15:53 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:41.744 [2024-07-22 18:15:53.490380] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:41.744 [2024-07-22 18:15:53.490565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65214 ] 00:08:41.744 [2024-07-22 18:15:53.668068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.055 [2024-07-22 18:15:53.907185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.314 18:15:54 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:42.315 18:15:54 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.315 18:15:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
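[annotation] The START/END banners, the "'[' 13 -le 1 ']'" argument-count check, and the real/user/sys summaries that frame each test above come from the run_test wrapper in autotest_common.sh. A minimal sketch of that wrapper follows; the exact implementation is assumed rather than quoted, only its observable behavior is taken from the trace:

run_test() {
    local name=$1
    shift
    if [ $# -le 1 ]; then     # the "'[' N -le 1 ']'" entries in the trace
        echo "usage: run_test <name> <command> [args...]"
        return 1
    fi
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # produces the real/user/sys lines on completion
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}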
00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:44.215 ************************************ 00:08:44.215 END TEST accel_fill 00:08:44.215 ************************************ 00:08:44.215 18:15:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:44.215 00:08:44.215 real 0m2.560s 00:08:44.215 user 0m2.251s 00:08:44.215 sys 0m0.217s 00:08:44.215 18:15:55 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.215 18:15:55 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:44.215 18:15:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:44.215 18:15:56 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:44.215 18:15:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:44.215 18:15:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.215 18:15:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:44.215 ************************************ 00:08:44.215 START TEST accel_copy_crc32c 00:08:44.215 ************************************ 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:08:44.215 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:44.215 [2024-07-22 18:15:56.101482] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:44.215 [2024-07-22 18:15:56.101656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65255 ] 00:08:44.475 [2024-07-22 18:15:56.279599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.734 [2024-07-22 18:15:56.514298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.734 18:15:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:46.636 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:46.636 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
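[annotation] The accel.sh@19-@23 entries that dominate this trace are a parsing loop: accel_perf prints its effective settings as "Key: value" lines, and the test splits each line on ':' to recover which engine and opcode actually ran. A minimal re-creation, with the matched key names assumed (the trace shows only the IFS=: split, the case dispatch, and the accel_module=/accel_opc= assignments):

while IFS=: read -r var val; do
    case "$var" in
        *Module*) accel_module=${val# } ;;     # -> software
        *Workload*) accel_opc=${val# } ;;      # -> copy_crc32c
    esac
done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y)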
00:08:46.636 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:46.636 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:46.636 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:46.636 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:46.636 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:46.636 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:46.636 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:46.636 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:46.636 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:46.636 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:46.637 00:08:46.637 real 0m2.563s 00:08:46.637 user 0m2.251s 00:08:46.637 sys 0m0.215s 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:46.637 18:15:58 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:46.637 ************************************ 00:08:46.637 END TEST accel_copy_crc32c 00:08:46.637 ************************************ 00:08:46.637 18:15:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:46.637 18:15:58 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:46.637 18:15:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:46.637 18:15:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.637 18:15:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:46.895 ************************************ 00:08:46.895 START TEST accel_copy_crc32c_C2 00:08:46.895 ************************************ 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:46.895 18:15:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:46.895 [2024-07-22 18:15:58.716769] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:46.895 [2024-07-22 18:15:58.717754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65307 ] 00:08:46.895 [2024-07-22 18:15:58.894312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.154 [2024-07-22 18:15:59.125716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:47.412 18:15:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:49.337 00:08:49.337 real 0m2.525s 00:08:49.337 user 0m2.226s 00:08:49.337 sys 0m0.203s 00:08:49.337 ************************************ 00:08:49.337 END TEST accel_copy_crc32c_C2 
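[annotation] The test ending here is the chain-count variant of the previous one: the invocation, verbatim from the run_test call above, is identical except for -C 2. Consistent with that, its setup trace read '8192 bytes' where the plain copy_crc32c run read a second '4096 bytes', which matches two 4-KiB buffers being chained into a single CRC:

run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2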
00:08:49.337 ************************************ 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.337 18:16:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:49.337 18:16:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:49.337 18:16:01 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:49.337 18:16:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:49.337 18:16:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.337 18:16:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:49.337 ************************************ 00:08:49.337 START TEST accel_dualcast 00:08:49.337 ************************************ 00:08:49.337 18:16:01 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:49.337 18:16:01 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:49.337 [2024-07-22 18:16:01.292523] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
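[annotation] Next up is dualcast, which broadcasts a single source buffer to two destination buffers. The harness line, verbatim from the run_test call above; the single '4096 bytes' read in the setup below appears to be the source size, and the rest of the trace follows the same template as the preceding tests:

run_test accel_dualcast accel_test -t 1 -w dualcast -y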
00:08:49.337 [2024-07-22 18:16:01.292732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65348 ] 00:08:49.596 [2024-07-22 18:16:01.466760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.855 [2024-07-22 18:16:01.690331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:50.113 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.113 18:16:01 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:50.114 18:16:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:52.036 18:16:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:52.036 00:08:52.036 real 0m2.547s 00:08:52.036 user 0m2.252s 00:08:52.036 sys 0m0.198s 00:08:52.036 18:16:03 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.036 ************************************ 00:08:52.036 END TEST accel_dualcast 00:08:52.036 ************************************ 00:08:52.036 18:16:03 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:52.036 18:16:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:52.036 18:16:03 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:52.036 18:16:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:52.036 18:16:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.036 18:16:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:52.036 ************************************ 00:08:52.036 START TEST accel_compare 00:08:52.036 ************************************ 00:08:52.036 18:16:03 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:52.036 18:16:03 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:52.036 [2024-07-22 18:16:03.886749] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
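[annotation] Every test in this stretch ends with the same three checks, traced at accel.sh@27 above. A sketch with assumed variable names (the trace shows only their expanded values); the right-hand side of == is a glob pattern, so xtrace prints each literal character escaped, which is why "software" renders as \s\o\f\t\w\a\r\e:

[[ -n $accel_module ]]            # an engine name was parsed from the output
[[ -n $accel_opc ]]               # an opcode was parsed (dualcast, compare, ...)
[[ $accel_module == software ]]   # the operation ran on the software engine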
00:08:52.036 [2024-07-22 18:16:03.886924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65395 ] 00:08:52.295 [2024-07-22 18:16:04.061315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.295 [2024-07-22 18:16:04.295161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:52.553 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:52.554 18:16:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:54.456 18:16:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:54.457 00:08:54.457 real 0m2.549s 00:08:54.457 user 0m0.017s 00:08:54.457 sys 0m0.001s 00:08:54.457 18:16:06 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.457 ************************************ 00:08:54.457 END TEST accel_compare 00:08:54.457 ************************************ 00:08:54.457 18:16:06 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:54.457 18:16:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:54.457 18:16:06 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:54.457 18:16:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:54.457 18:16:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.457 18:16:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:54.457 ************************************ 00:08:54.457 START TEST accel_xor 00:08:54.457 ************************************ 00:08:54.457 18:16:06 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:54.457 18:16:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:54.715 [2024-07-22 18:16:06.479323] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
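[annotation] The xor run starting here is the last workload in this excerpt, again invoked verbatim via run_test. No source-count flag is passed, so the val=2 read in the setup below is presumably accel_perf's default number of XOR source buffers:

run_test accel_xor accel_test -t 1 -w xor -y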
00:08:54.715 [2024-07-22 18:16:06.479477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65441 ] 00:08:54.715 [2024-07-22 18:16:06.640253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.973 [2024-07-22 18:16:06.875072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
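[annotation] accel_module comes back as "software" in every one of these tests because the build_accel_config preamble (accel.sh@31-@41) never requests a hardware module, so the JSON handed to accel_perf via -c /dev/fd/62 is effectively empty. A paraphrase of that preamble as visible in the trace, with the guard details assumed:

accel_json_cfg=()                      # @31: no module config fragments yet
# @32-@34: three "[[ 0 -gt 0 ]]" guards all fail -- nothing hardware-backed
#          was requested for this job
# @36:     "[[ -n '' ]]" fails -- the fragment list is still empty
IFS=,                                  # @40: would join fragments with commas...
jq -r . <<< "[${accel_json_cfg[*]}]"   # @41: ...into the -c config (shape assumed)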
00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.231 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.232 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:55.232 18:16:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:55.232 18:16:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:55.232 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:55.232 18:16:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:57.170 18:16:08 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.170 18:16:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:57.170 ************************************ 00:08:57.170 END TEST accel_xor 00:08:57.170 ************************************ 00:08:57.171 18:16:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:57.171 18:16:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:57.171 00:08:57.171 real 0m2.550s 00:08:57.171 user 0m2.274s 00:08:57.171 sys 0m0.175s 00:08:57.171 18:16:08 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.171 18:16:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:57.171 18:16:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:57.171 18:16:09 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:57.171 18:16:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:57.171 18:16:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.171 18:16:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:57.171 ************************************ 00:08:57.171 START TEST accel_xor 00:08:57.171 ************************************ 00:08:57.171 18:16:09 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:57.171 18:16:09 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:57.171 [2024-07-22 18:16:09.095617] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:57.171 [2024-07-22 18:16:09.095875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65488 ] 00:08:57.429 [2024-07-22 18:16:09.273651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.688 [2024-07-22 18:16:09.511840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.947 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:57.948 18:16:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:59.851 18:16:11 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:59.851 ************************************ 00:08:59.851 END TEST accel_xor 00:08:59.851 ************************************ 00:08:59.851 18:16:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:59.851 00:08:59.851 real 0m2.586s 00:08:59.851 user 0m2.282s 00:08:59.851 sys 0m0.203s 00:08:59.851 18:16:11 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.851 18:16:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:59.851 18:16:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:59.851 18:16:11 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:59.851 18:16:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:59.851 18:16:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.851 18:16:11 accel -- common/autotest_common.sh@10 -- # set +x 00:08:59.851 ************************************ 00:08:59.851 START TEST accel_dif_verify 00:08:59.851 ************************************ 00:08:59.851 18:16:11 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:59.851 18:16:11 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:59.851 18:16:11 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:59.852 18:16:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.852 18:16:11 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:59.852 18:16:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.852 18:16:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:59.852 18:16:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:59.852 18:16:11 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:59.852 18:16:11 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:59.852 18:16:11 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:59.852 18:16:11 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:59.852 18:16:11 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:59.852 18:16:11 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:59.852 18:16:11 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:59.852 [2024-07-22 18:16:11.731029] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:59.852 [2024-07-22 18:16:11.731208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65534 ] 00:09:00.111 [2024-07-22 18:16:11.907832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.370 [2024-07-22 18:16:12.141137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.370 18:16:12 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.371 18:16:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:02.272 18:16:14 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:09:02.272 18:16:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:02.272 00:09:02.272 real 0m2.553s 00:09:02.272 user 0m2.248s 00:09:02.272 sys 0m0.206s 00:09:02.272 ************************************ 00:09:02.272 END TEST accel_dif_verify 00:09:02.272 ************************************ 00:09:02.272 18:16:14 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.272 18:16:14 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:09:02.272 18:16:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:02.272 18:16:14 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:02.272 18:16:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:02.272 18:16:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.272 18:16:14 accel -- common/autotest_common.sh@10 -- # set +x 00:09:02.272 ************************************ 00:09:02.272 START TEST accel_dif_generate 00:09:02.272 ************************************ 00:09:02.273 18:16:14 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:09:02.273 18:16:14 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:09:02.273 18:16:14 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:09:02.273 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:02.273 18:16:14 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:02.273 18:16:14 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:09:02.273 18:16:14 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:02.273 18:16:14 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:09:02.273 18:16:14 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:02.273 18:16:14 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:02.273 18:16:14 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:02.273 18:16:14 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.273 18:16:14 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:02.273 18:16:14 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:09:02.273 18:16:14 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:09:02.531 [2024-07-22 18:16:14.327836] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:02.531 [2024-07-22 18:16:14.328003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65581 ] 00:09:02.531 [2024-07-22 18:16:14.492542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.789 [2024-07-22 18:16:14.726666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:09:03.047 18:16:14 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.047 18:16:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:09:04.950 18:16:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:04.950 00:09:04.950 real 0m2.551s 
00:09:04.950 user 0m2.262s 00:09:04.950 sys 0m0.193s 00:09:04.950 18:16:16 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.950 18:16:16 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:09:04.950 ************************************ 00:09:04.950 END TEST accel_dif_generate 00:09:04.950 ************************************ 00:09:04.950 18:16:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:04.950 18:16:16 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:09:04.950 18:16:16 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:04.950 18:16:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.950 18:16:16 accel -- common/autotest_common.sh@10 -- # set +x 00:09:04.950 ************************************ 00:09:04.950 START TEST accel_dif_generate_copy 00:09:04.950 ************************************ 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:04.950 18:16:16 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:09:04.950 [2024-07-22 18:16:16.936321] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:04.950 [2024-07-22 18:16:16.936497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65632 ] 00:09:05.209 [2024-07-22 18:16:17.117817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.467 [2024-07-22 18:16:17.353391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:05.727 18:16:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:07.630 00:09:07.630 real 0m2.551s 00:09:07.630 user 0m2.257s 00:09:07.630 sys 0m0.200s 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.630 18:16:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:09:07.630 ************************************ 00:09:07.630 END TEST accel_dif_generate_copy 00:09:07.630 ************************************ 00:09:07.630 18:16:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:07.630 18:16:19 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:09:07.630 18:16:19 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:07.630 18:16:19 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:09:07.630 18:16:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.630 18:16:19 accel -- common/autotest_common.sh@10 -- # set +x 00:09:07.630 ************************************ 00:09:07.630 START TEST accel_comp 00:09:07.630 ************************************ 00:09:07.630 18:16:19 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:07.630 18:16:19 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:09:07.630 18:16:19 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:09:07.630 18:16:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:07.630 18:16:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:07.630 18:16:19 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:07.630 18:16:19 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:07.630 18:16:19 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:09:07.630 18:16:19 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:07.630 18:16:19 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:07.630 18:16:19 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:07.630 18:16:19 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:07.630 18:16:19 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:07.630 18:16:19 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:09:07.630 18:16:19 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:09:07.630 [2024-07-22 18:16:19.537908] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:07.630 [2024-07-22 18:16:19.538084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65674 ] 00:09:07.889 [2024-07-22 18:16:19.712140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.148 [2024-07-22 18:16:19.945045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.148 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.407 18:16:20 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.407 18:16:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:09:10.331 18:16:22 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:10.331 00:09:10.331 real 0m2.563s 00:09:10.331 user 0m2.264s 00:09:10.331 sys 0m0.206s 00:09:10.331 ************************************ 00:09:10.331 END TEST accel_comp 00:09:10.331 ************************************ 00:09:10.331 18:16:22 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.331 18:16:22 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:09:10.331 18:16:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:10.331 18:16:22 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:10.331 18:16:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:10.331 18:16:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.331 18:16:22 accel -- common/autotest_common.sh@10 -- # set +x 00:09:10.331 ************************************ 00:09:10.331 START TEST accel_decomp 00:09:10.331 ************************************ 00:09:10.331 18:16:22 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:09:10.331 18:16:22 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:09:10.331 [2024-07-22 18:16:22.137291] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:10.331 [2024-07-22 18:16:22.137435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65725 ] 00:09:10.331 [2024-07-22 18:16:22.303484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.590 [2024-07-22 18:16:22.534475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
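The val= entries traced here are accel.sh reading accel_perf's configuration echo back line by line: IFS=: splits each "Label: value" line of the report, read -r var val captures the pair, and the case "$var" arms store the module (software) and opcode (decompress) that the [[ -n ... ]] checks at accel/accel.sh@27 later assert on. A minimal sketch of that idiom, assuming the summary arrives on stdout; the label patterns below are hypothetical, since the xtrace never shows the left-hand side of the pairs:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk     # checkout path used on this runner
    accel_module= accel_opc=
    while IFS=: read -r var val; do
        val=${val# }                          # trim the space after the colon
        case "$var" in                        # label patterns are assumptions
            *[Mm]odule*) accel_module=$val ;; # e.g. "software"
            *[Ww]orkload*) accel_opc=$val ;;  # e.g. "decompress"
        esac
    done < <("$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y)
    [[ -n $accel_module && -n $accel_opc ]]   # mirrors the accel.sh@27 checks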
00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:10.849 18:16:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:12.754 18:16:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:12.754 00:09:12.754 real 0m2.597s 00:09:12.754 user 0m2.317s 00:09:12.754 sys 0m0.183s 00:09:12.754 18:16:24 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.754 18:16:24 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:09:12.754 ************************************ 00:09:12.754 END TEST accel_decomp 00:09:12.754 ************************************ 00:09:12.754 18:16:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:12.754 18:16:24 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:12.754 18:16:24 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:12.754 18:16:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.754 18:16:24 accel -- common/autotest_common.sh@10 -- # set +x 00:09:12.754 ************************************ 00:09:12.754 START TEST accel_decomp_full 00:09:12.754 ************************************ 00:09:12.754 18:16:24 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:09:12.754 18:16:24 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:09:13.023 [2024-07-22 18:16:24.790912] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
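This run repeats the decompress test with one extra flag, -o 0, visible in the run_test and accel_perf lines above. The configuration echo further down reports '111250 bytes' where the earlier runs reported '4096 bytes', so -o 0 appears to size each operation to the whole bib input rather than the 4 KiB default; that reading is an inference from the trace, not something the log states. A sketch of the variant, with the harness's "-c /dev/fd/62" JSON accel config omitted:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # -t, -w, -l, -y and -o are copied verbatim from the invocation above
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -o 0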
00:09:13.023 [2024-07-22 18:16:24.791107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65767 ] 00:09:13.023 [2024-07-22 18:16:24.962228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.281 [2024-07-22 18:16:25.230298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.540 18:16:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:15.444 18:16:27 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:15.444 18:16:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:15.444 00:09:15.444 real 0m2.586s 00:09:15.444 user 0m2.290s 00:09:15.444 sys 0m0.199s 00:09:15.444 18:16:27 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.444 18:16:27 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:09:15.444 ************************************ 00:09:15.444 END TEST accel_decomp_full 00:09:15.444 ************************************ 00:09:15.444 18:16:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:15.444 18:16:27 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:15.444 18:16:27 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:15.444 18:16:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.444 18:16:27 accel -- common/autotest_common.sh@10 -- # set +x 00:09:15.444 ************************************ 00:09:15.444 START TEST accel_decomp_mcore 00:09:15.444 ************************************ 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:15.444 18:16:27 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:15.444 [2024-07-22 18:16:27.426547] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:15.444 [2024-07-22 18:16:27.426764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65814 ] 00:09:15.703 [2024-07-22 18:16:27.604821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.961 [2024-07-22 18:16:27.879933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.961 [2024-07-22 18:16:27.880062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.962 [2024-07-22 18:16:27.880379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.962 [2024-07-22 18:16:27.880379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.220 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.221 18:16:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.155 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:18.155 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.155 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.155 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.155 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:18.155 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.155 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.155 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.155 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:18.155 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.155 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.155 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.155 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:18.156 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.156 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.156 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.156 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:18.156 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.156 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.156 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.156 18:16:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.156 18:16:30 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:18.156 00:09:18.156 real 0m2.647s 00:09:18.156 user 0m0.019s 00:09:18.156 sys 0m0.004s 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.156 18:16:30 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:18.156 ************************************ 00:09:18.156 END TEST accel_decomp_mcore 00:09:18.156 ************************************ 00:09:18.156 18:16:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:18.156 18:16:30 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:18.156 18:16:30 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:18.156 18:16:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.156 18:16:30 accel -- common/autotest_common.sh@10 -- # set +x 00:09:18.156 ************************************ 00:09:18.156 START TEST accel_decomp_full_mcore 00:09:18.156 ************************************ 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:18.156 18:16:30 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:18.156 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:18.156 [2024-07-22 18:16:30.111625] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:18.156 [2024-07-22 18:16:30.111830] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65863 ] 00:09:18.413 [2024-07-22 18:16:30.280496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.671 [2024-07-22 18:16:30.524667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.671 [2024-07-22 18:16:30.524822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.671 [2024-07-22 18:16:30.524927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.671 [2024-07-22 18:16:30.524936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:18.930 18:16:30 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.930 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.931 18:16:30 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:18.931 18:16:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:20.855 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:20.856 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:20.856 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:20.856 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:20.856 18:16:32 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:20.856 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:20.856 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:20.856 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:20.856 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:20.856 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:20.856 18:16:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:20.856 00:09:20.856 real 0m2.626s 00:09:20.856 user 0m0.016s 00:09:20.856 sys 0m0.006s 00:09:20.856 18:16:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.856 18:16:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:20.856 ************************************ 00:09:20.856 END TEST accel_decomp_full_mcore 00:09:20.856 ************************************ 00:09:20.856 18:16:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:20.856 18:16:32 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:20.856 18:16:32 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:20.856 18:16:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.856 18:16:32 accel -- common/autotest_common.sh@10 -- # set +x 00:09:20.856 ************************************ 00:09:20.856 START TEST accel_decomp_mthread 00:09:20.856 ************************************ 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:20.856 18:16:32 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:20.856 [2024-07-22 18:16:32.789048] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
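The two mcore runs that finish above add -m 0xf to the same decompress job, and the mask is visible in their startup lines: DPDK is launched with "-c 0xf" and four reactors come up on cores 0 through 3. A sketch of the core-mask variant, with every flag copied from the traced accel_decomp_full_mcore invocation and the harness-built JSON config again left out:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -o 0 \
        -m 0xf    # run reactors on cores 0-3, as in the NOTICE lines above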
00:09:20.856 [2024-07-22 18:16:32.789270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65917 ] 00:09:21.114 [2024-07-22 18:16:32.964669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.373 [2024-07-22 18:16:33.201812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
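accel_decomp_mthread swaps the core mask for -T 2 (see its run_test line above): the EAL parameters here keep the single-core "-c 0x1", while the configuration echo below records a val=2 where every single-threaded run recorded val=1. Reading -T as the number of worker threads per core is an inference from that echo, not something this log spells out. The corresponding sketch:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # -T 2 is copied from the traced invocation; its interpretation
    # (worker threads per core) is my assumption
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -T 2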
00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.633 18:16:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:23.536 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:23.537 00:09:23.537 real 0m2.576s 00:09:23.537 user 0m2.294s 00:09:23.537 sys 0m0.189s 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:23.537 18:16:35 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:23.537 ************************************ 00:09:23.537 END TEST accel_decomp_mthread 00:09:23.537 ************************************ 00:09:23.537 18:16:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:23.537 18:16:35 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:23.537 18:16:35 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:23.537 18:16:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:23.537 18:16:35 accel -- common/autotest_common.sh@10 -- # set +x 00:09:23.537 ************************************ 00:09:23.537 START 
TEST accel_decomp_full_mthread 00:09:23.537 ************************************ 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:23.537 18:16:35 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:23.537 [2024-07-22 18:16:35.420911] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:23.537 [2024-07-22 18:16:35.421107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65965 ] 00:09:23.796 [2024-07-22 18:16:35.597764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.054 [2024-07-22 18:16:35.838826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.054 18:16:36 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.054 18:16:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:26.587 00:09:26.587 real 0m2.629s 00:09:26.587 user 0m2.329s 00:09:26.587 sys 0m0.206s 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.587 ************************************ 00:09:26.587 18:16:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:26.587 END TEST accel_decomp_full_mthread 00:09:26.587 ************************************ 
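Stripped of the xtrace plumbing, each accel_decomp_* case above is a single accel_perf run; the harness only varies the transfer size (-o), the thread count (-T), and the core mask. A minimal standalone sketch of the full_mthread case just finished, using the repo layout from this log — reading -y as "verify the output" and -o 0 as "use the full input size" is inferred from the trace ('111250 bytes' for the *_full_* variants vs '4096 bytes' otherwise), not confirmed:

    #!/usr/bin/env bash
    # Sketch: re-run the decompress case by hand, outside the autotest harness.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk    # repo path used throughout this log
    args=(
      -t 1                            # run for '1 seconds', as in the trace
      -w decompress                   # workload under test
      -l "$SPDK_DIR/test/accel/bib"   # compressed input file
      -y                              # verify the decompressed output (assumption)
      -o 0                            # full-size transfers, the *_full_* variants (assumption)
      -T 2                            # two worker threads, the *_mthread variants
    )
    "$SPDK_DIR/build/examples/accel_perf" "${args[@]}"

The harness additionally passes -c /dev/fd/62 to hand accel_perf a JSON accel config over a file descriptor, but since build_accel_config produced an empty config in these runs (accel_json_cfg=(), every [[ 0 -gt 0 ]] false), that flag can be dropped when reproducing by hand.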
00:09:26.587 18:16:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:26.587 18:16:38 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:26.587 18:16:38 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:26.587 18:16:38 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:26.587 18:16:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:26.587 18:16:38 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:26.587 18:16:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:26.587 18:16:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.587 18:16:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:26.587 18:16:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:26.587 18:16:38 accel -- common/autotest_common.sh@10 -- # set +x 00:09:26.587 18:16:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:26.587 18:16:38 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:26.587 18:16:38 accel -- accel/accel.sh@41 -- # jq -r . 00:09:26.587 ************************************ 00:09:26.587 START TEST accel_dif_functional_tests 00:09:26.587 ************************************ 00:09:26.587 18:16:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:26.587 [2024-07-22 18:16:38.157759] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:26.587 [2024-07-22 18:16:38.157956] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66007 ] 00:09:26.587 [2024-07-22 18:16:38.340056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:26.846 [2024-07-22 18:16:38.621542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.846 [2024-07-22 18:16:38.621609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.846 [2024-07-22 18:16:38.621612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.104 00:09:27.104 00:09:27.104 CUnit - A unit testing framework for C - Version 2.1-3 00:09:27.104 http://cunit.sourceforge.net/ 00:09:27.104 00:09:27.104 00:09:27.104 Suite: accel_dif 00:09:27.104 Test: verify: DIF generated, GUARD check ...passed 00:09:27.104 Test: verify: DIF generated, APPTAG check ...passed 00:09:27.104 Test: verify: DIF generated, REFTAG check ...passed 00:09:27.104 Test: verify: DIF not generated, GUARD check ...passed 00:09:27.104 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 18:16:38.937900] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:27.104 [2024-07-22 18:16:38.938037] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:27.104 passed 00:09:27.104 Test: verify: DIF not generated, REFTAG check ...[2024-07-22 18:16:38.938106] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:27.104 passed 00:09:27.104 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:27.104 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-22 18:16:38.938337] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:27.104 passed 00:09:27.104 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:09:27.104 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:27.104 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:27.104 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:09:27.104 Test: verify copy: DIF generated, GUARD check ...[2024-07-22 18:16:38.938691] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:27.104 passed 00:09:27.104 Test: verify copy: DIF generated, APPTAG check ...passed 00:09:27.104 Test: verify copy: DIF generated, REFTAG check ...passed 00:09:27.104 Test: verify copy: DIF not generated, GUARD check ...passed 00:09:27.104 Test: verify copy: DIF not generated, APPTAG check ...passed 00:09:27.104 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-22 18:16:38.939027] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:27.104 [2024-07-22 18:16:38.939113] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:27.104 [2024-07-22 18:16:38.939206] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:27.104 passed 00:09:27.104 Test: generate copy: DIF generated, GUARD check ...passed 00:09:27.104 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:27.104 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:27.105 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:27.105 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:27.105 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:27.105 Test: generate copy: iovecs-len validate ...passed 00:09:27.105 Test: generate copy: buffer alignment validate ...passed[2024-07-22 18:16:38.939784] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:09:27.105 00:09:27.105 00:09:27.105 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.105 suites 1 1 n/a 0 0 00:09:27.105 tests 26 26 26 0 0 00:09:27.105 asserts 115 115 115 0 n/a 00:09:27.105 00:09:27.105 Elapsed time = 0.007 seconds 00:09:28.531 00:09:28.531 real 0m2.061s 00:09:28.531 user 0m3.809s 00:09:28.531 sys 0m0.286s 00:09:28.531 18:16:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.531 18:16:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:28.531 ************************************ 00:09:28.531 END TEST accel_dif_functional_tests 00:09:28.531 ************************************ 00:09:28.531 18:16:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:28.531 ************************************ 00:09:28.531 END TEST accel 00:09:28.531 ************************************ 00:09:28.531 00:09:28.531 real 1m1.857s 00:09:28.531 user 1m6.499s 00:09:28.531 sys 0m6.221s 00:09:28.531 18:16:40 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.531 18:16:40 accel -- common/autotest_common.sh@10 -- # set +x 00:09:28.531 18:16:40 -- common/autotest_common.sh@1142 -- # return 0 00:09:28.531 18:16:40 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:28.531 18:16:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:28.531 18:16:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.531 18:16:40 -- common/autotest_common.sh@10 -- # set +x 00:09:28.531 ************************************ 00:09:28.531 START TEST accel_rpc 00:09:28.531 ************************************ 00:09:28.531 18:16:40 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:28.531 * Looking for test storage... 00:09:28.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:28.531 18:16:40 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:28.531 18:16:40 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66094 00:09:28.531 18:16:40 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:28.531 18:16:40 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 66094 00:09:28.531 18:16:40 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 66094 ']' 00:09:28.531 18:16:40 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.531 18:16:40 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.531 18:16:40 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.531 18:16:40 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.531 18:16:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.531 [2024-07-22 18:16:40.378888] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:28.531 [2024-07-22 18:16:40.379032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66094 ] 00:09:28.789 [2024-07-22 18:16:40.547179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.048 [2024-07-22 18:16:40.823078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.306 18:16:41 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.306 18:16:41 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:29.306 18:16:41 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:29.306 18:16:41 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:29.306 18:16:41 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:29.306 18:16:41 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:29.306 18:16:41 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:29.306 18:16:41 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:29.306 18:16:41 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.306 18:16:41 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.306 ************************************ 00:09:29.306 START TEST accel_assign_opcode 00:09:29.306 ************************************ 00:09:29.306 18:16:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:09:29.306 18:16:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:29.306 18:16:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.306 18:16:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:29.306 [2024-07-22 18:16:41.320046] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:29.567 18:16:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.567 18:16:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:29.567 18:16:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.567 18:16:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:29.567 [2024-07-22 18:16:41.327998] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:29.567 18:16:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.567 18:16:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:29.567 18:16:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.568 18:16:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:30.134 18:16:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.134 18:16:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:30.134 18:16:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:30.134 18:16:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.134 18:16:42 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:09:30.134 18:16:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:30.134 18:16:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.393 software 00:09:30.393 00:09:30.393 real 0m0.847s 00:09:30.393 user 0m0.059s 00:09:30.393 sys 0m0.010s 00:09:30.393 18:16:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.393 18:16:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:30.393 ************************************ 00:09:30.393 END TEST accel_assign_opcode 00:09:30.393 ************************************ 00:09:30.393 18:16:42 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:30.393 18:16:42 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 66094 00:09:30.393 18:16:42 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 66094 ']' 00:09:30.393 18:16:42 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 66094 00:09:30.393 18:16:42 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:09:30.393 18:16:42 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:30.393 18:16:42 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66094 00:09:30.393 18:16:42 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:30.393 18:16:42 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:30.393 killing process with pid 66094 00:09:30.393 18:16:42 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66094' 00:09:30.393 18:16:42 accel_rpc -- common/autotest_common.sh@967 -- # kill 66094 00:09:30.393 18:16:42 accel_rpc -- common/autotest_common.sh@972 -- # wait 66094 00:09:32.952 00:09:32.952 real 0m4.235s 00:09:32.952 user 0m4.193s 00:09:32.952 sys 0m0.563s 00:09:32.952 18:16:44 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.952 18:16:44 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.952 ************************************ 00:09:32.952 END TEST accel_rpc 00:09:32.952 ************************************ 00:09:32.952 18:16:44 -- common/autotest_common.sh@1142 -- # return 0 00:09:32.952 18:16:44 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:32.952 18:16:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:32.952 18:16:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.952 18:16:44 -- common/autotest_common.sh@10 -- # set +x 00:09:32.952 ************************************ 00:09:32.952 START TEST app_cmdline 00:09:32.952 ************************************ 00:09:32.952 18:16:44 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:32.952 * Looking for test storage... 
00:09:32.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:32.952 18:16:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:32.952 18:16:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=66215 00:09:32.952 18:16:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 66215 00:09:32.952 18:16:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:32.952 18:16:44 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 66215 ']' 00:09:32.952 18:16:44 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.952 18:16:44 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.952 18:16:44 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.952 18:16:44 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.952 18:16:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:32.952 [2024-07-22 18:16:44.715493] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:32.952 [2024-07-22 18:16:44.715698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66215 ] 00:09:32.952 [2024-07-22 18:16:44.894298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.210 [2024-07-22 18:16:45.174141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.145 18:16:45 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.145 18:16:45 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:09:34.145 18:16:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:34.404 { 00:09:34.404 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:09:34.404 "fields": { 00:09:34.404 "major": 24, 00:09:34.404 "minor": 9, 00:09:34.404 "patch": 0, 00:09:34.404 "suffix": "-pre", 00:09:34.404 "commit": "f7b31b2b9" 00:09:34.404 } 00:09:34.404 } 00:09:34.404 18:16:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:34.404 18:16:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:34.404 18:16:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:34.404 18:16:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:34.404 18:16:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:34.404 18:16:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:34.404 18:16:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.404 18:16:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:34.404 18:16:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:34.404 18:16:46 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:34.404 18:16:46 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:34.663 request: 00:09:34.663 { 00:09:34.663 "method": "env_dpdk_get_mem_stats", 00:09:34.664 "req_id": 1 00:09:34.664 } 00:09:34.664 Got JSON-RPC error response 00:09:34.664 response: 00:09:34.664 { 00:09:34.664 "code": -32601, 00:09:34.664 "message": "Method not found" 00:09:34.664 } 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:34.664 18:16:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 66215 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 66215 ']' 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 66215 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66215 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:34.664 killing process with pid 66215 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66215' 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@967 -- # kill 66215 00:09:34.664 18:16:46 app_cmdline -- common/autotest_common.sh@972 -- # wait 66215 00:09:37.226 00:09:37.226 real 0m4.317s 00:09:37.226 user 0m4.675s 00:09:37.226 sys 0m0.651s 00:09:37.226 18:16:48 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:37.226 ************************************ 00:09:37.226 END TEST app_cmdline 00:09:37.226 ************************************ 00:09:37.226 18:16:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:37.226 18:16:48 -- common/autotest_common.sh@1142 -- # return 0 00:09:37.226 18:16:48 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:37.226 18:16:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:37.226 18:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.226 18:16:48 -- common/autotest_common.sh@10 -- # set +x 00:09:37.226 ************************************ 00:09:37.226 START TEST version 00:09:37.226 ************************************ 00:09:37.226 18:16:48 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:37.226 * Looking for test storage... 00:09:37.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:37.226 18:16:48 version -- app/version.sh@17 -- # get_header_version major 00:09:37.226 18:16:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:37.226 18:16:48 version -- app/version.sh@14 -- # cut -f2 00:09:37.226 18:16:48 version -- app/version.sh@14 -- # tr -d '"' 00:09:37.226 18:16:48 version -- app/version.sh@17 -- # major=24 00:09:37.226 18:16:48 version -- app/version.sh@18 -- # get_header_version minor 00:09:37.226 18:16:48 version -- app/version.sh@14 -- # cut -f2 00:09:37.226 18:16:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:37.226 18:16:48 version -- app/version.sh@14 -- # tr -d '"' 00:09:37.226 18:16:48 version -- app/version.sh@18 -- # minor=9 00:09:37.226 18:16:48 version -- app/version.sh@19 -- # get_header_version patch 00:09:37.226 18:16:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:37.226 18:16:48 version -- app/version.sh@14 -- # cut -f2 00:09:37.226 18:16:48 version -- app/version.sh@14 -- # tr -d '"' 00:09:37.226 18:16:48 version -- app/version.sh@19 -- # patch=0 00:09:37.226 18:16:48 version -- app/version.sh@20 -- # get_header_version suffix 00:09:37.226 18:16:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:37.226 18:16:48 version -- app/version.sh@14 -- # cut -f2 00:09:37.226 18:16:48 version -- app/version.sh@14 -- # tr -d '"' 00:09:37.226 18:16:48 version -- app/version.sh@20 -- # suffix=-pre 00:09:37.226 18:16:48 version -- app/version.sh@22 -- # version=24.9 00:09:37.226 18:16:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:37.226 18:16:48 version -- app/version.sh@28 -- # version=24.9rc0 00:09:37.226 18:16:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:37.226 18:16:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:37.226 18:16:48 version -- app/version.sh@30 -- # py_version=24.9rc0 00:09:37.227 18:16:48 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:09:37.227 00:09:37.227 real 0m0.141s 00:09:37.227 user 0m0.084s 00:09:37.227 sys 0m0.088s 00:09:37.227 ************************************ 00:09:37.227 END TEST version 00:09:37.227 ************************************ 00:09:37.227 18:16:48 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:37.227 18:16:48 version -- common/autotest_common.sh@10 -- # set +x 
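The version checks above are pure text processing: each get_header_version call greps one #define out of include/spdk/version.h, takes the second tab-separated field (cut -f2), and strips the quotes (tr -d '"'). Condensed into a few lines — the ver() helper is an editorial shorthand, not part of version.sh; the values in comments are the ones seen in this run:

    #!/usr/bin/env bash
    # Sketch of version.sh's header parsing, condensed from the trace above.
    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    ver() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
    major=$(ver MAJOR)     # 24
    minor=$(ver MINOR)     # 9
    patch=$(ver PATCH)     # 0
    suffix=$(ver SUFFIX)   # -pre
    version="$major.$minor"
    (( patch != 0 )) && version="$version.$patch"
    [[ $suffix == -pre ]] && version="${version}rc0"
    echo "$version"        # 24.9rc0, matching the [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] check above

The final assertion in the test compares this string against what the Python bindings report via python3 -c 'import spdk; print(spdk.__version__)', so the two version sources are kept in lockstep.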
00:09:37.227 18:16:49 -- common/autotest_common.sh@1142 -- # return 0 00:09:37.227 18:16:49 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:09:37.227 18:16:49 -- spdk/autotest.sh@198 -- # uname -s 00:09:37.227 18:16:49 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:09:37.227 18:16:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:37.227 18:16:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:37.227 18:16:49 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:09:37.227 18:16:49 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:37.227 18:16:49 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:37.227 18:16:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.227 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:09:37.227 ************************************ 00:09:37.227 START TEST blockdev_nvme 00:09:37.227 ************************************ 00:09:37.227 18:16:49 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:37.227 * Looking for test storage... 00:09:37.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:37.227 18:16:49 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66383 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:37.227 
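In the blockdev_nvme setup that follows, setup_nvme_conf captures the output of gen_nvme.sh (mapfile -t json) and feeds it to rpc_cmd load_subsystem_config -j, as the next lines show. Pretty-printed, that config (copied verbatim from the -j argument below) attaches one NVMe controller per emulated PCIe function:

    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
      ]
    }

Nvme2 (traddr 0000:00:12.0) surfaces three namespaces (Nvme2n1 through Nvme2n3) and Nvme3 is the FDP subsystem (subnqn nqn.2019-08.org.qemu:fdp-subsys3), as the bdev_get_bdevs dump further below confirms.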
18:16:49 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66383 00:09:37.227 18:16:49 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:37.227 18:16:49 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 66383 ']' 00:09:37.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.227 18:16:49 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.227 18:16:49 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.227 18:16:49 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.227 18:16:49 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.227 18:16:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:37.486 [2024-07-22 18:16:49.262698] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:37.486 [2024-07-22 18:16:49.262895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66383 ] 00:09:37.486 [2024-07-22 18:16:49.440194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.745 [2024-07-22 18:16:49.695454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.701 18:16:50 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:38.701 18:16:50 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:09:38.701 18:16:50 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:09:38.701 18:16:50 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:09:38.701 18:16:50 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:09:38.701 18:16:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:38.701 18:16:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:38.701 18:16:50 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:38.701 18:16:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.701 18:16:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:38.961 18:16:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.961 18:16:50 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:09:38.961 18:16:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.961 18:16:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:38.961 18:16:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.961 18:16:50 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:09:38.961 18:16:50 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n 
accel
00:09:38.961 18:16:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:38.961 18:16:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:38.961 18:16:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:38.961 18:16:50 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:09:38.961 18:16:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:38.961 18:16:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:39.221 18:16:51 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:39.221 18:16:51 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:09:39.221 18:16:51 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:39.221 18:16:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:39.221 18:16:51 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:39.221 18:16:51 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:09:39.221 18:16:51 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:09:39.221 18:16:51 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:39.221 18:16:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:39.221 18:16:51 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:09:39.221 18:16:51 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:39.221 18:16:51 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:09:39.222 18:16:51 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name
00:09:39.222 18:16:51 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n'
{
  "name": "Nvme0n1",
  "aliases": [ "b8742721-bd3f-4e76-bb2d-6f858c78603e" ],
  "product_name": "NVMe disk",
  "block_size": 4096,
  "num_blocks": 1548666,
  "uuid": "b8742721-bd3f-4e76-bb2d-6f858c78603e",
  "md_size": 64,
  "md_interleave": false,
  "dif_type": 0,
  "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
  "claimed": false,
  "zoned": false,
  "supported_io_types": { "read": true, "write": true, "unmap": true, "flush": true, "reset": true, "nvme_admin": true, "nvme_io": true, "nvme_io_md": true, "write_zeroes": true, "zcopy": false, "get_zone_info": false, "zone_management": false, "zone_append": false, "compare": true, "compare_and_write": false, "abort": true, "seek_hole": false, "seek_data": false, "copy": true, "nvme_iov_md": false },
  "driver_specific": {
    "nvme": [
      {
        "pci_address": "0000:00:10.0",
        "trid": { "trtype": "PCIe", "traddr": "0000:00:10.0" },
        "ctrlr_data": { "cntlid": 0, "vendor_id": "0x1b36", "model_number": "QEMU NVMe Ctrl", "serial_number": "12340", "firmware_revision": "8.0.0", "subnqn": "nqn.2019-08.org.qemu:12340", "oacs": { "security": 0, "format": 1, "firmware": 0, "ns_manage": 1 }, "multi_ctrlr": false, "ana_reporting": false },
        "vs": { "nvme_version": "1.4" },
        "ns_data": { "id": 1, "can_share": false }
      }
    ],
    "mp_policy": "active_passive"
  }
}
{
  "name": "Nvme1n1",
  "aliases": [ "ba5b52bb-1658-4a2a-9ee1-4769ebd8cdef" ],
  "product_name": "NVMe disk",
  "block_size": 4096,
  "num_blocks": 1310720,
  "uuid": "ba5b52bb-1658-4a2a-9ee1-4769ebd8cdef",
  "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
  "claimed": false,
  "zoned": false,
  "supported_io_types": { "read": true, "write": true, "unmap": true, "flush": true, "reset": true, "nvme_admin": true, "nvme_io": true, "nvme_io_md": false, "write_zeroes": true, "zcopy": false, "get_zone_info": false, "zone_management": false, "zone_append": false, "compare": true, "compare_and_write": false, "abort": true, "seek_hole": false, "seek_data": false, "copy": true, "nvme_iov_md": false },
  "driver_specific": {
    "nvme": [
      {
        "pci_address": "0000:00:11.0",
        "trid": { "trtype": "PCIe", "traddr": "0000:00:11.0" },
        "ctrlr_data": { "cntlid": 0, "vendor_id": "0x1b36", "model_number": "QEMU NVMe Ctrl", "serial_number": "12341", "firmware_revision": "8.0.0", "subnqn": "nqn.2019-08.org.qemu:12341", "oacs": { "security": 0, "format": 1, "firmware": 0, "ns_manage": 1 }, "multi_ctrlr": false, "ana_reporting": false },
        "vs": { "nvme_version": "1.4" },
        "ns_data": { "id": 1, "can_share": false }
      }
    ],
    "mp_policy": "active_passive"
  }
}
{
  "name": "Nvme2n1",
  "aliases": [ "aa28a65b-e64e-44f6-a0cb-13f4bf3c51a9" ],
  "product_name": "NVMe disk",
  "block_size": 4096,
  "num_blocks": 1048576,
  "uuid": "aa28a65b-e64e-44f6-a0cb-13f4bf3c51a9",
  "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
  "claimed": false,
  "zoned": false,
  "supported_io_types": { "read": true, "write": true, "unmap": true, "flush": true, "reset": true, "nvme_admin": true, "nvme_io": true, "nvme_io_md": false, "write_zeroes": true, "zcopy": false, "get_zone_info": false, "zone_management": false, "zone_append": false, "compare": true, "compare_and_write": false, "abort": true, "seek_hole": false, "seek_data": false, "copy": true, "nvme_iov_md": false },
  "driver_specific": {
    "nvme": [
      {
        "pci_address": "0000:00:12.0",
        "trid": { "trtype": "PCIe", "traddr": "0000:00:12.0" },
        "ctrlr_data": { "cntlid": 0, "vendor_id": "0x1b36", "model_number": "QEMU NVMe Ctrl", "serial_number": "12342", "firmware_revision": "8.0.0", "subnqn": "nqn.2019-08.org.qemu:12342", "oacs": { "security": 0, "format": 1, "firmware": 0, "ns_manage": 1 }, "multi_ctrlr": false, "ana_reporting": false },
        "vs": { "nvme_version": "1.4" },
        "ns_data": { "id": 1, "can_share": false }
      }
    ],
    "mp_policy": "active_passive"
  }
}
{
  "name": "Nvme2n2",
  "aliases": [ "0a221864-3b98-446a-9da3-dc2b8a4fdc51" ],
  "product_name": "NVMe disk",
  "block_size": 4096,
  "num_blocks": 1048576,
  "uuid": "0a221864-3b98-446a-9da3-dc2b8a4fdc51",
  "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
  "claimed": false,
  "zoned": false,
  "supported_io_types": { "read": true, "write": true, "unmap": true, "flush": true, "reset": true, "nvme_admin": true, "nvme_io": true, "nvme_io_md": false, "write_zeroes": true, "zcopy": false, "get_zone_info": false, "zone_management": false, "zone_append": false, "compare": true, "compare_and_write": false, "abort": true, "seek_hole": false, "seek_data": false, "copy": true, "nvme_iov_md": false },
  "driver_specific": {
    "nvme": [
      {
        "pci_address": "0000:00:12.0",
        "trid": { "trtype": "PCIe", "traddr": "0000:00:12.0" },
        "ctrlr_data": { "cntlid": 0, "vendor_id": "0x1b36", "model_number": "QEMU NVMe Ctrl", "serial_number": "12342", "firmware_revision": "8.0.0", "subnqn": "nqn.2019-08.org.qemu:12342", "oacs": { "security": 0, "format": 1, "firmware": 0, "ns_manage": 1 }, "multi_ctrlr": false, "ana_reporting": false },
        "vs": { "nvme_version": "1.4" },
        "ns_data": { "id": 2, "can_share": false }
      }
    ],
    "mp_policy": "active_passive"
  }
}
{
  "name": "Nvme2n3",
  "aliases": [ "0344f320-1675-4431-be20-37959edab89b" ],
  "product_name": "NVMe disk",
  "block_size": 4096,
  "num_blocks": 1048576,
  "uuid": "0344f320-1675-4431-be20-37959edab89b",
  "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
  "claimed": false,
  "zoned": false,
  "supported_io_types": { "read": true, "write": true, "unmap": true, "flush": true, "reset": true, "nvme_admin": true, "nvme_io": true, "nvme_io_md": false, "write_zeroes": true, "zcopy": false, "get_zone_info": false, "zone_management": false, "zone_append": false, "compare": true, "compare_and_write": false, "abort": true, "seek_hole": false, "seek_data": false, "copy": true, "nvme_iov_md": false },
  "driver_specific": {
    "nvme": [
      {
        "pci_address": "0000:00:12.0",
        "trid": { "trtype": "PCIe", "traddr": "0000:00:12.0" },
        "ctrlr_data": { "cntlid": 0, "vendor_id": "0x1b36", "model_number": "QEMU NVMe Ctrl", "serial_number": "12342", "firmware_revision": "8.0.0", "subnqn": "nqn.2019-08.org.qemu:12342", "oacs": { "security": 0, "format": 1, "firmware": 0, "ns_manage": 1 }, "multi_ctrlr": false, "ana_reporting": false },
        "vs": { "nvme_version": "1.4" },
        "ns_data": { "id": 3, "can_share": false }
      }
    ],
    "mp_policy": "active_passive"
  }
}
{
  "name": "Nvme3n1",
  "aliases": [ "4f5d4449-7c90-4b5f-bdd7-688db80104d0" ],
  "product_name": "NVMe disk",
  "block_size": 4096,
  "num_blocks": 262144,
  "uuid": "4f5d4449-7c90-4b5f-bdd7-688db80104d0",
  "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
  "claimed": false,
  "zoned": false,
  "supported_io_types": { "read": true, "write": true, "unmap": true, "flush": true, "reset": true, "nvme_admin": true, "nvme_io": true, "nvme_io_md": false, "write_zeroes": true, "zcopy": false, "get_zone_info": false, "zone_management": false, "zone_append": false, "compare": true, "compare_and_write": false, "abort": true, "seek_hole": false, "seek_data": false, "copy": true, "nvme_iov_md": false },
  "driver_specific": {
    "nvme": [
      {
        "pci_address": "0000:00:13.0",
        "trid": { "trtype": "PCIe", "traddr": "0000:00:13.0" },
        "ctrlr_data": { "cntlid": 0, "vendor_id": "0x1b36", "model_number": "QEMU NVMe Ctrl", "serial_number": "12343", "firmware_revision": "8.0.0", "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3", "oacs": { "security": 0, "format": 1, "firmware": 0, "ns_manage": 1 }, "multi_ctrlr": true, "ana_reporting": false },
        "vs": { "nvme_version": "1.4" },
        "ns_data": { "id": 1, "can_share": true }
      }
    ],
    "mp_policy": "active_passive"
  }
}
00:09:39.222 18:16:51 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:09:39.222 18:16:51 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1
00:09:39.222 18:16:51 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:09:39.222 18:16:51 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 66383
00:09:39.222 18:16:51 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 66383 ']'
00:09:39.222 18:16:51 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 66383
00:09:39.222 18:16:51 blockdev_nvme -- common/autotest_common.sh@953 -- # uname
00:09:39.222 18:16:51 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:39.222 18:16:51 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66383
00:09:39.222 18:16:51 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:39.222 killing process with pid 66383
18:16:51 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:39.222 18:16:51 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66383'
00:09:39.222 18:16:51 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 66383
00:09:39.222 18:16:51 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 66383
00:09:41.757 18:16:53 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:09:41.757 18:16:53 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:09:41.757 18:16:53 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:09:41.757 18:16:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:41.757 18:16:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:41.757 ************************************
00:09:41.757 START TEST bdev_hello_world
00:09:41.757 ************************************
18:16:53 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:09:41.757 [2024-07-22 18:16:53.603045] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:09:41.757 [2024-07-22 18:16:53.603261] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66478 ] 00:09:42.016 [2024-07-22 18:16:53.783240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.277 [2024-07-22 18:16:54.072414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.845 [2024-07-22 18:16:54.714764] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:42.845 [2024-07-22 18:16:54.714833] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:42.845 [2024-07-22 18:16:54.714879] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:42.846 [2024-07-22 18:16:54.717898] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:42.846 [2024-07-22 18:16:54.718543] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:42.846 [2024-07-22 18:16:54.718577] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:42.846 [2024-07-22 18:16:54.718798] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:42.846 00:09:42.846 [2024-07-22 18:16:54.718829] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:43.782 00:09:43.782 real 0m2.285s 00:09:43.782 user 0m1.883s 00:09:43.782 sys 0m0.290s 00:09:43.782 18:16:55 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.782 ************************************ 00:09:43.782 END TEST bdev_hello_world 00:09:43.782 ************************************ 00:09:43.782 18:16:55 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:44.041 18:16:55 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:44.041 18:16:55 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:44.041 18:16:55 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:44.041 18:16:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.041 18:16:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.041 ************************************ 00:09:44.041 START TEST bdev_bounds 00:09:44.041 ************************************ 00:09:44.041 18:16:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:09:44.041 18:16:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=66526 00:09:44.041 18:16:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:44.041 18:16:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 66526' 00:09:44.041 Process bdevio pid: 66526 00:09:44.041 18:16:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:44.041 18:16:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 66526 00:09:44.041 18:16:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 66526 ']' 00:09:44.041 18:16:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.041 18:16:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.041 
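The bdevio server launched above is gated on waitforlisten, which simply polls until the freshly started process owns its UNIX-domain RPC socket. A hedged sketch of that polling pattern (function name and the poll cadence are illustrative, not the verbatim autotest_common.sh helper):

  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 1; i <= max_retries; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1  # target died before it started listening
      [[ -S $rpc_addr ]] && return 0          # UNIX socket exists, the RPC server is up
      sleep 0.1
    done
    return 1                                  # gave up after max_retries polls
  }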
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.041 18:16:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.041 18:16:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.041 18:16:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:44.041 [2024-07-22 18:16:55.925993] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:44.041 [2024-07-22 18:16:55.926184] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66526 ] 00:09:44.300 [2024-07-22 18:16:56.102411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:44.558 [2024-07-22 18:16:56.337510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.558 [2024-07-22 18:16:56.337646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.558 [2024-07-22 18:16:56.337716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.124 18:16:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.124 18:16:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:09:45.124 18:16:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:45.124 I/O targets: 00:09:45.124 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:45.124 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:45.124 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:45.124 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:45.124 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:45.124 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:45.124 00:09:45.124 00:09:45.124 CUnit - A unit testing framework for C - Version 2.1-3 00:09:45.124 http://cunit.sourceforge.net/ 00:09:45.124 00:09:45.124 00:09:45.124 Suite: bdevio tests on: Nvme3n1 00:09:45.124 Test: blockdev write read block ...passed 00:09:45.124 Test: blockdev write zeroes read block ...passed 00:09:45.124 Test: blockdev write zeroes read no split ...passed 00:09:45.382 Test: blockdev write zeroes read split ...passed 00:09:45.382 Test: blockdev write zeroes read split partial ...passed 00:09:45.382 Test: blockdev reset ...[2024-07-22 18:16:57.189398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:45.382 passed 00:09:45.383 Test: blockdev write read 8 blocks ...[2024-07-22 18:16:57.193246] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
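bdevio was started with -w, so after initialization it parks and waits; the CUnit suites in this trace only run once a client sends it the perform_tests RPC, and tests.py is that trigger. A condensed sketch of the handshake, with the flags and paths taken from the invocation above and the teardown matching the killprocess call later in the trace:

  # bdevio waits (-w) for the perform_tests RPC before running any suite.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
  bdevio_pid=$!
  # ...block until the RPC socket accepts connections, as sketched earlier...
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests  # fires all six suites
  kill "$bdevio_pid" && wait "$bdevio_pid"  # the harness tears the server down afterwards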
00:09:45.383 passed 00:09:45.383 Test: blockdev write read size > 128k ...passed 00:09:45.383 Test: blockdev write read invalid size ...passed 00:09:45.383 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:45.383 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:45.383 Test: blockdev write read max offset ...passed 00:09:45.383 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:45.383 Test: blockdev writev readv 8 blocks ...passed 00:09:45.383 Test: blockdev writev readv 30 x 1block ...passed 00:09:45.383 Test: blockdev writev readv block ...passed 00:09:45.383 Test: blockdev writev readv size > 128k ...passed 00:09:45.383 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:45.383 Test: blockdev comparev and writev ...[2024-07-22 18:16:57.201096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27120a000 len:0x1000 00:09:45.383 [2024-07-22 18:16:57.201155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:45.383 passed 00:09:45.383 Test: blockdev nvme passthru rw ...passed 00:09:45.383 Test: blockdev nvme passthru vendor specific ...passed 00:09:45.383 Test: blockdev nvme admin passthru ...[2024-07-22 18:16:57.202029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:45.383 [2024-07-22 18:16:57.202066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:45.383 passed 00:09:45.383 Test: blockdev copy ...passed 00:09:45.383 Suite: bdevio tests on: Nvme2n3 00:09:45.383 Test: blockdev write read block ...passed 00:09:45.383 Test: blockdev write zeroes read block ...passed 00:09:45.383 Test: blockdev write zeroes read no split ...passed 00:09:45.383 Test: blockdev write zeroes read split ...passed 00:09:45.383 Test: blockdev write zeroes read split partial ...passed 00:09:45.383 Test: blockdev reset ...[2024-07-22 18:16:57.271080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:45.383 passed 00:09:45.383 Test: blockdev write read 8 blocks ...[2024-07-22 18:16:57.275355] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:45.383 passed 00:09:45.383 Test: blockdev write read size > 128k ...passed 00:09:45.383 Test: blockdev write read invalid size ...passed 00:09:45.383 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:45.383 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:45.383 Test: blockdev write read max offset ...passed 00:09:45.383 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:45.383 Test: blockdev writev readv 8 blocks ...passed 00:09:45.383 Test: blockdev writev readv 30 x 1block ...passed 00:09:45.383 Test: blockdev writev readv block ...passed 00:09:45.383 Test: blockdev writev readv size > 128k ...passed 00:09:45.383 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:45.383 Test: blockdev comparev and writev ...[2024-07-22 18:16:57.282692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x280a04000 len:0x1000 00:09:45.383 [2024-07-22 18:16:57.282742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:45.383 passed 00:09:45.383 Test: blockdev nvme passthru rw ...passed 00:09:45.383 Test: blockdev nvme passthru vendor specific ...passed 00:09:45.383 Test: blockdev nvme admin passthru ...[2024-07-22 18:16:57.283573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:45.383 [2024-07-22 18:16:57.283609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:45.383 passed 00:09:45.383 Test: blockdev copy ...passed 00:09:45.383 Suite: bdevio tests on: Nvme2n2 00:09:45.383 Test: blockdev write read block ...passed 00:09:45.383 Test: blockdev write zeroes read block ...passed 00:09:45.383 Test: blockdev write zeroes read no split ...passed 00:09:45.383 Test: blockdev write zeroes read split ...passed 00:09:45.383 Test: blockdev write zeroes read split partial ...passed 00:09:45.383 Test: blockdev reset ...[2024-07-22 18:16:57.353169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:45.383 passed 00:09:45.383 Test: blockdev write read 8 blocks ...[2024-07-22 18:16:57.357317] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:45.383 passed 00:09:45.383 Test: blockdev write read size > 128k ...passed 00:09:45.383 Test: blockdev write read invalid size ...passed 00:09:45.383 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:45.383 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:45.383 Test: blockdev write read max offset ...passed 00:09:45.383 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:45.383 Test: blockdev writev readv 8 blocks ...passed 00:09:45.383 Test: blockdev writev readv 30 x 1block ...passed 00:09:45.383 Test: blockdev writev readv block ...passed 00:09:45.383 Test: blockdev writev readv size > 128k ...passed 00:09:45.383 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:45.383 Test: blockdev comparev and writev ...[2024-07-22 18:16:57.364698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27cc3a000 len:0x1000 00:09:45.383 [2024-07-22 18:16:57.364754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:45.383 passed 00:09:45.383 Test: blockdev nvme passthru rw ...passed 00:09:45.383 Test: blockdev nvme passthru vendor specific ...passed 00:09:45.383 Test: blockdev nvme admin passthru ...[2024-07-22 18:16:57.365484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:45.383 [2024-07-22 18:16:57.365523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:45.383 passed 00:09:45.383 Test: blockdev copy ...passed 00:09:45.383 Suite: bdevio tests on: Nvme2n1 00:09:45.383 Test: blockdev write read block ...passed 00:09:45.383 Test: blockdev write zeroes read block ...passed 00:09:45.383 Test: blockdev write zeroes read no split ...passed 00:09:45.642 Test: blockdev write zeroes read split ...passed 00:09:45.642 Test: blockdev write zeroes read split partial ...passed 00:09:45.642 Test: blockdev reset ...[2024-07-22 18:16:57.434416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:45.642 [2024-07-22 18:16:57.439020] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:45.642 passed 00:09:45.642 Test: blockdev write read 8 blocks ...passed 00:09:45.642 Test: blockdev write read size > 128k ...passed 00:09:45.642 Test: blockdev write read invalid size ...passed 00:09:45.642 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:45.642 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:45.642 Test: blockdev write read max offset ...passed 00:09:45.642 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:45.642 Test: blockdev writev readv 8 blocks ...passed 00:09:45.642 Test: blockdev writev readv 30 x 1block ...passed 00:09:45.642 Test: blockdev writev readv block ...passed 00:09:45.642 Test: blockdev writev readv size > 128k ...passed 00:09:45.642 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:45.642 Test: blockdev comparev and writev ...passed 00:09:45.642 Test: blockdev nvme passthru rw ...[2024-07-22 18:16:57.446204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27cc34000 len:0x1000 00:09:45.642 [2024-07-22 18:16:57.446259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:45.642 passed 00:09:45.642 Test: blockdev nvme passthru vendor specific ...passed 00:09:45.642 Test: blockdev nvme admin passthru ...[2024-07-22 18:16:57.446986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:45.642 [2024-07-22 18:16:57.447029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:45.642 passed 00:09:45.642 Test: blockdev copy ...passed 00:09:45.642 Suite: bdevio tests on: Nvme1n1 00:09:45.642 Test: blockdev write read block ...passed 00:09:45.642 Test: blockdev write zeroes read block ...passed 00:09:45.642 Test: blockdev write zeroes read no split ...passed 00:09:45.642 Test: blockdev write zeroes read split ...passed 00:09:45.642 Test: blockdev write zeroes read split partial ...passed 00:09:45.642 Test: blockdev reset ...[2024-07-22 18:16:57.512750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:45.642 passed 00:09:45.642 Test: blockdev write read 8 blocks ...[2024-07-22 18:16:57.516397] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:45.642 passed 00:09:45.642 Test: blockdev write read size > 128k ...passed 00:09:45.642 Test: blockdev write read invalid size ...passed 00:09:45.642 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:45.642 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:45.642 Test: blockdev write read max offset ...passed 00:09:45.642 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:45.642 Test: blockdev writev readv 8 blocks ...passed 00:09:45.642 Test: blockdev writev readv 30 x 1block ...passed 00:09:45.642 Test: blockdev writev readv block ...passed 00:09:45.642 Test: blockdev writev readv size > 128k ...passed 00:09:45.642 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:45.642 Test: blockdev comparev and writev ...[2024-07-22 18:16:57.523489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27cc30000 len:0x1000 00:09:45.642 [2024-07-22 18:16:57.523546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:45.642 passed 00:09:45.642 Test: blockdev nvme passthru rw ...passed 00:09:45.642 Test: blockdev nvme passthru vendor specific ...passed 00:09:45.642 Test: blockdev nvme admin passthru ...[2024-07-22 18:16:57.524393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:45.642 [2024-07-22 18:16:57.524431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:45.642 passed 00:09:45.642 Test: blockdev copy ...passed 00:09:45.642 Suite: bdevio tests on: Nvme0n1 00:09:45.642 Test: blockdev write read block ...passed 00:09:45.642 Test: blockdev write zeroes read block ...passed 00:09:45.642 Test: blockdev write zeroes read no split ...passed 00:09:45.642 Test: blockdev write zeroes read split ...passed 00:09:45.642 Test: blockdev write zeroes read split partial ...passed 00:09:45.642 Test: blockdev reset ...[2024-07-22 18:16:57.590738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:45.642 passed 00:09:45.642 Test: blockdev write read 8 blocks ...[2024-07-22 18:16:57.594609] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:45.642 passed 00:09:45.642 Test: blockdev write read size > 128k ...passed 00:09:45.642 Test: blockdev write read invalid size ...passed 00:09:45.642 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:45.642 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:45.642 Test: blockdev write read max offset ...passed 00:09:45.642 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:45.642 Test: blockdev writev readv 8 blocks ...passed 00:09:45.642 Test: blockdev writev readv 30 x 1block ...passed 00:09:45.642 Test: blockdev writev readv block ...passed 00:09:45.642 Test: blockdev writev readv size > 128k ...passed 00:09:45.642 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:45.642 Test: blockdev comparev and writev ...passed 00:09:45.642 Test: blockdev nvme passthru rw ...[2024-07-22 18:16:57.601291] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:45.642 separate metadata which is not supported yet. 
00:09:45.642 passed 00:09:45.642 Test: blockdev nvme passthru vendor specific ...passed 00:09:45.642 Test: blockdev nvme admin passthru ...[2024-07-22 18:16:57.601782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:45.642 [2024-07-22 18:16:57.601844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:45.642 passed 00:09:45.642 Test: blockdev copy ...passed 00:09:45.642 00:09:45.642 Run Summary: Type Total Ran Passed Failed Inactive 00:09:45.642 suites 6 6 n/a 0 0 00:09:45.642 tests 138 138 138 0 0 00:09:45.642 asserts 893 893 893 0 n/a 00:09:45.642 00:09:45.642 Elapsed time = 1.298 seconds 00:09:45.642 0 00:09:45.642 18:16:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 66526 00:09:45.642 18:16:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 66526 ']' 00:09:45.642 18:16:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 66526 00:09:45.642 18:16:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:09:45.642 18:16:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:45.642 18:16:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66526 00:09:45.900 18:16:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:45.900 killing process with pid 66526 00:09:45.900 18:16:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:45.900 18:16:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66526' 00:09:45.900 18:16:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 66526 00:09:45.900 18:16:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 66526 00:09:46.837 18:16:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:46.837 00:09:46.837 real 0m2.844s 00:09:46.837 user 0m6.861s 00:09:46.837 sys 0m0.429s 00:09:46.837 ************************************ 00:09:46.837 END TEST bdev_bounds 00:09:46.837 ************************************ 00:09:46.837 18:16:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.837 18:16:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:46.837 18:16:58 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:46.837 18:16:58 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:46.837 18:16:58 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:46.837 18:16:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.837 18:16:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:46.837 ************************************ 00:09:46.837 START TEST bdev_nbd 00:09:46.837 ************************************ 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:46.837 
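The bdev_nbd test starting here drives nbd_function_test, which exports each bdev through the kernel NBD driver and exercises the resulting /dev/nbdX nodes. A condensed sketch of the per-device round-trip traced below (the RPC socket and bdev names match this run; waitfornbd's retry loop and error handling are trimmed, and the dd target path is illustrative):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
  for bdev in "${bdev_list[@]}"; do
    nbd=$($rpc nbd_start_disk "$bdev")                      # target picks the next free /dev/nbdX
    grep -q -w "$(basename "$nbd")" /proc/partitions        # kernel has registered the device
    dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct  # one O_DIRECT read as a smoke test
  done
  for nbd in $($rpc nbd_get_disks | jq -r '.[].nbd_device'); do
    $rpc nbd_stop_disk "$nbd"                               # tear every export back down
  done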
18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=66585 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 66585 /var/tmp/spdk-nbd.sock 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 66585 ']' 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.837 18:16:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:46.837 [2024-07-22 18:16:58.848254] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
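The [[ -e /sys/module/nbd ]] check above is the test's hard precondition: the kernel nbd driver must already be loaded (the CI images load it ahead of time; this job only checks). For a local run, something like the following would satisfy the same guard; the modprobe line is an assumption, not a step this job executes:

  if [[ ! -e /sys/module/nbd ]]; then
    # Assumed local setup step; nbds_max sizes the /dev/nbd* pool the test draws from.
    sudo modprobe nbd nbds_max=16 || { echo 'nbd module unavailable'; exit 1; }
  fi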
00:09:46.837 [2024-07-22 18:16:58.848403] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.096 [2024-07-22 18:16:59.012465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.354 [2024-07-22 18:16:59.244558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:47.992 18:16:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.271 1+0 records in 
00:09:48.271 1+0 records out 00:09:48.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00306791 s, 1.3 MB/s 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:48.271 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.531 1+0 records in 00:09:48.531 1+0 records out 00:09:48.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000756241 s, 5.4 MB/s 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:48.531 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.098 1+0 records in 00:09:49.098 1+0 records out 00:09:49.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117333 s, 3.5 MB/s 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:49.098 18:17:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.098 1+0 records in 00:09:49.098 1+0 records out 00:09:49.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00170808 s, 2.4 MB/s 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.098 18:17:01 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:49.098 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.665 1+0 records in 00:09:49.665 1+0 records out 00:09:49.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697261 s, 5.9 MB/s 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.665 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.666 1+0 records in 00:09:49.666 1+0 records out 00:09:49.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000923923 s, 4.4 MB/s 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:49.666 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:50.233 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:50.233 { 00:09:50.233 "nbd_device": "/dev/nbd0", 00:09:50.233 "bdev_name": "Nvme0n1" 00:09:50.233 }, 00:09:50.233 { 00:09:50.233 "nbd_device": "/dev/nbd1", 00:09:50.233 "bdev_name": "Nvme1n1" 00:09:50.233 }, 00:09:50.233 { 00:09:50.233 "nbd_device": "/dev/nbd2", 00:09:50.233 "bdev_name": "Nvme2n1" 00:09:50.233 }, 00:09:50.233 { 00:09:50.233 "nbd_device": "/dev/nbd3", 00:09:50.233 "bdev_name": "Nvme2n2" 00:09:50.233 }, 00:09:50.233 { 00:09:50.233 "nbd_device": "/dev/nbd4", 00:09:50.233 "bdev_name": "Nvme2n3" 00:09:50.233 }, 00:09:50.233 { 00:09:50.233 "nbd_device": "/dev/nbd5", 00:09:50.233 "bdev_name": "Nvme3n1" 00:09:50.233 } 00:09:50.233 ]' 00:09:50.233 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:50.233 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:50.233 { 00:09:50.233 "nbd_device": "/dev/nbd0", 00:09:50.233 "bdev_name": "Nvme0n1" 00:09:50.233 }, 00:09:50.233 { 00:09:50.233 "nbd_device": "/dev/nbd1", 00:09:50.233 "bdev_name": "Nvme1n1" 00:09:50.233 }, 00:09:50.233 { 00:09:50.233 "nbd_device": "/dev/nbd2", 00:09:50.233 "bdev_name": "Nvme2n1" 00:09:50.233 }, 00:09:50.233 { 00:09:50.233 "nbd_device": "/dev/nbd3", 00:09:50.233 "bdev_name": "Nvme2n2" 00:09:50.233 }, 00:09:50.233 { 00:09:50.233 "nbd_device": "/dev/nbd4", 00:09:50.233 "bdev_name": "Nvme2n3" 00:09:50.233 }, 00:09:50.233 { 00:09:50.233 "nbd_device": "/dev/nbd5", 00:09:50.233 "bdev_name": "Nvme3n1" 00:09:50.233 } 00:09:50.233 ]' 00:09:50.233 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:50.233 18:17:01 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:50.233 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.233 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:50.233 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:50.233 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:50.233 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.233 18:17:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:50.492 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:50.492 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:50.492 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:50.492 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.492 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.492 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:50.492 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.492 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.492 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.492 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:50.750 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:50.750 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:50.750 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:50.750 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.750 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.750 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:50.750 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.750 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.750 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.750 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:51.008 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:51.008 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:51.008 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:51.009 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.009 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.009 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:51.009 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:51.009 18:17:02 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:51.009 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.009 18:17:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:51.267 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:51.267 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:51.267 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:51.267 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.267 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.267 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:51.267 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:51.268 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.268 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.268 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:51.526 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:51.526 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:51.526 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:51.526 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.526 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.526 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:51.526 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:51.526 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.526 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.526 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:51.784 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:51.784 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:51.784 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:51.784 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.784 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.784 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:51.784 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:51.784 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.784 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:51.784 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.784 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:52.044 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:52.044 18:17:03 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:52.044 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:52.044 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:52.044 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:52.044 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:52.044 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:52.044 18:17:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:52.044 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:52.308 /dev/nbd0 00:09:52.308 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:52.308 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:52.308 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:52.308 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:52.308 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:52.308 
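Between attach and detach rounds the harness counts how many devices the target still exports: nbd_get_disks returns a JSON array, jq extracts each nbd_device field, and grep -c counts the matches. Because grep -c exits non-zero when the count is 0, the script follows it with true, which is the bare '# true' visible in the trace. A sketch, assuming rpc.py is on PATH; the real helper captures the JSON into nbd_disks_json first and echoes it, but piping directly is equivalent for counting:

    # Sketch of nbd_get_count (nbd_common.sh@61-66).
    # '|| true' absorbs grep's exit status 1 when nothing matches.
    nbd_get_count() {
        local rpc_server=$1
        rpc.py -s "$rpc_server" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true
    }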
18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:52.308 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:52.308 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:52.308 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:52.308 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:52.308 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.308 1+0 records in 00:09:52.308 1+0 records out 00:09:52.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446604 s, 9.2 MB/s 00:09:52.584 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.584 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:52.584 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.584 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:52.584 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:52.584 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.584 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:52.584 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:52.844 /dev/nbd1 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.844 1+0 records in 00:09:52.844 1+0 records out 00:09:52.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650489 s, 6.3 MB/s 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # return 0 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:52.844 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:53.103 /dev/nbd10 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.103 1+0 records in 00:09:53.103 1+0 records out 00:09:53.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442523 s, 9.3 MB/s 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:53.103 18:17:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:53.361 /dev/nbd11 00:09:53.361 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:53.362 18:17:05 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.362 1+0 records in 00:09:53.362 1+0 records out 00:09:53.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122174 s, 3.4 MB/s 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:53.362 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:53.621 /dev/nbd12 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.621 1+0 records in 00:09:53.621 1+0 records out 00:09:53.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00145724 s, 2.8 MB/s 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:53.621 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:53.880 /dev/nbd13 
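Attachment is verified with the mirror-image helper plus a read probe: once the name shows up in /proc/partitions, a single 4 KiB O_DIRECT read must succeed and produce a non-empty file before the device counts as ready. A sketch, with the poll interval again assumed:

    # Sketch of waitfornbd (autotest_common.sh@866-887): wait for the node,
    # then prove it is readable with one direct-I/O block.
    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                          # interval assumed
        done
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        local size
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]                       # an empty read means not ready
    }

The iflag=direct read is the part that matters: it bypasses the page cache, so success shows the NBD connection is actually serving I/O, not merely that the node exists.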
00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.880 1+0 records in 00:09:53.880 1+0 records out 00:09:53.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808907 s, 5.1 MB/s 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.880 18:17:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:54.140 { 00:09:54.140 "nbd_device": "/dev/nbd0", 00:09:54.140 "bdev_name": "Nvme0n1" 00:09:54.140 }, 00:09:54.140 { 00:09:54.140 "nbd_device": "/dev/nbd1", 00:09:54.140 "bdev_name": "Nvme1n1" 00:09:54.140 }, 00:09:54.140 { 00:09:54.140 "nbd_device": "/dev/nbd10", 00:09:54.140 "bdev_name": "Nvme2n1" 00:09:54.140 }, 00:09:54.140 { 00:09:54.140 "nbd_device": "/dev/nbd11", 00:09:54.140 "bdev_name": "Nvme2n2" 00:09:54.140 }, 00:09:54.140 { 00:09:54.140 "nbd_device": "/dev/nbd12", 00:09:54.140 "bdev_name": "Nvme2n3" 00:09:54.140 }, 00:09:54.140 { 00:09:54.140 "nbd_device": "/dev/nbd13", 00:09:54.140 "bdev_name": "Nvme3n1" 00:09:54.140 } 00:09:54.140 ]' 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:54.140 { 00:09:54.140 "nbd_device": "/dev/nbd0", 00:09:54.140 "bdev_name": "Nvme0n1" 00:09:54.140 }, 00:09:54.140 { 00:09:54.140 "nbd_device": "/dev/nbd1", 00:09:54.140 "bdev_name": "Nvme1n1" 00:09:54.140 }, 00:09:54.140 { 00:09:54.140 "nbd_device": "/dev/nbd10", 00:09:54.140 "bdev_name": "Nvme2n1" 
00:09:54.140 }, 00:09:54.140 { 00:09:54.140 "nbd_device": "/dev/nbd11", 00:09:54.140 "bdev_name": "Nvme2n2" 00:09:54.140 }, 00:09:54.140 { 00:09:54.140 "nbd_device": "/dev/nbd12", 00:09:54.140 "bdev_name": "Nvme2n3" 00:09:54.140 }, 00:09:54.140 { 00:09:54.140 "nbd_device": "/dev/nbd13", 00:09:54.140 "bdev_name": "Nvme3n1" 00:09:54.140 } 00:09:54.140 ]' 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:54.140 /dev/nbd1 00:09:54.140 /dev/nbd10 00:09:54.140 /dev/nbd11 00:09:54.140 /dev/nbd12 00:09:54.140 /dev/nbd13' 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:54.140 /dev/nbd1 00:09:54.140 /dev/nbd10 00:09:54.140 /dev/nbd11 00:09:54.140 /dev/nbd12 00:09:54.140 /dev/nbd13' 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:54.140 256+0 records in 00:09:54.140 256+0 records out 00:09:54.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00692666 s, 151 MB/s 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.140 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:54.398 256+0 records in 00:09:54.398 256+0 records out 00:09:54.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154357 s, 6.8 MB/s 00:09:54.398 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.398 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:54.657 256+0 records in 00:09:54.657 256+0 records out 00:09:54.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148788 s, 7.0 MB/s 00:09:54.657 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.657 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:54.657 256+0 records in 00:09:54.657 256+0 records out 00:09:54.657 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168472 s, 6.2 MB/s 00:09:54.657 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.657 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:54.916 256+0 records in 00:09:54.916 256+0 records out 00:09:54.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146195 s, 7.2 MB/s 00:09:54.916 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.916 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:54.916 256+0 records in 00:09:54.916 256+0 records out 00:09:54.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176356 s, 5.9 MB/s 00:09:54.916 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.916 18:17:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:55.175 256+0 records in 00:09:55.175 256+0 records out 00:09:55.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173743 s, 6.0 MB/s 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd 
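The data pass writes one shared 1 MiB random file to every attached device with O_DIRECT, then reads each device back with cmp -b -n 1M against the same file; the final compare, against /dev/nbd13, continues just below. The real helper takes a write/verify mode argument and is invoked once per mode, as the '@100' and '@101' call sites show; this sketch folds both passes together:

    # Sketch of nbd_dd_data_verify (nbd_common.sh@70-85), write and verify combined.
    nbd_dd_data_verify() {
        local nbd_list=("$@") tmp=/tmp/nbdrandtest i
        dd if=/dev/urandom of="$tmp" bs=4096 count=256      # 1 MiB of random data
        for i in "${nbd_list[@]}"; do
            dd if="$tmp" of="$i" bs=4096 count=256 oflag=direct
        done
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp" "$i"                        # byte-wise read-back check
        done
        rm "$tmp"
    }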
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.175 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:55.464 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:55.464 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:55.464 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:55.464 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:55.464 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:55.464 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:55.464 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:55.464 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:55.464 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.464 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:55.747 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:55.747 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:55.747 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:55.747 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:55.747 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:55.747 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:55.747 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:55.747 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:55.747 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.747 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:56.006 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:56.006 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:56.006 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:56.006 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.006 18:17:07 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.006 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:56.006 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.006 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.006 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.006 18:17:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.573 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:57.140 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:57.140 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:57.140 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:57.140 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.140 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.140 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:57.140 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:57.140 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.140 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:57.140 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.140 18:17:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:57.140 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:57.706 malloc_lvol_verify 00:09:57.706 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:57.706 c8dce095-54a6-4eea-8b66-3c0defbea76d 00:09:57.706 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:57.963 60c83e9b-4c3e-4f9a-b73b-600e07e5c29e 00:09:57.963 18:17:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:58.221 /dev/nbd0 00:09:58.221 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:58.221 mke2fs 1.46.5 (30-Dec-2021) 00:09:58.221 Discarding device blocks: 0/4096 done 00:09:58.221 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:58.221 00:09:58.221 Allocating group tables: 0/1 done 00:09:58.221 Writing inode tables: 0/1 done 00:09:58.221 Creating journal (1024 blocks): done 00:09:58.221 Writing superblocks and filesystem accounting information: 0/1 done 00:09:58.221 00:09:58.221 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:58.221 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:58.221 18:17:10 
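The last NBD check layers a logical volume on a malloc bdev and proves the exported device can host a filesystem: create a 16 MB malloc bdev with 512-byte blocks, build an lvstore on it, carve out a 4 MB lvol, export lvs/lvol as /dev/nbd0, and run mkfs.ext4 on it. The two UUIDs in the trace are the lvstore and lvol handles the RPCs return. A sketch of the sequence, assuming rpc.py on PATH and the same socket:

    # Sketch of nbd_with_lvol_verify (nbd_common.sh@131-142).
    # bdev_malloc_create takes total size in MB and block size in bytes.
    sock=/var/tmp/spdk-nbd.sock
    rpc.py -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    rpc.py -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc.py -s "$sock" bdev_lvol_create lvol 4 -l lvs
    rpc.py -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    mkfs_ret=$?
    rpc.py -s "$sock" nbd_stop_disk /dev/nbd0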
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.221 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:58.221 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:58.221 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:58.221 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.221 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 66585 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 66585 ']' 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 66585 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66585 00:09:58.479 killing process with pid 66585 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66585' 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 66585 00:09:58.479 18:17:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 66585 00:09:59.853 18:17:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:59.853 00:09:59.853 real 0m12.949s 00:09:59.853 user 0m18.227s 00:09:59.853 sys 0m4.168s 00:09:59.853 ************************************ 00:09:59.853 END TEST bdev_nbd 00:09:59.853 ************************************ 00:09:59.853 18:17:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.853 18:17:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:59.853 18:17:11 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:59.853 18:17:11 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:59.853 18:17:11 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:09:59.853 skipping fio tests on NVMe due to multi-ns failures. 
00:09:59.853 18:17:11 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:59.853 18:17:11 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:59.853 18:17:11 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:59.853 18:17:11 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:59.853 18:17:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.853 18:17:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:59.853 ************************************ 00:09:59.853 START TEST bdev_verify 00:09:59.853 ************************************ 00:09:59.853 18:17:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:59.853 [2024-07-22 18:17:11.813714] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:59.853 [2024-07-22 18:17:11.813899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66991 ] 00:10:00.111 [2024-07-22 18:17:11.993108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:00.370 [2024-07-22 18:17:12.279065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.370 [2024-07-22 18:17:12.279078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.305 Running I/O for 5 seconds... 
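With the NBD suite finished (and fio explicitly skipped for multi-namespace reasons), the remaining tests drive bdevperf against the same six namespaces. For the verify workload, bdevperf writes a pattern and reads it back for comparison; -q 128 is the queue depth, -o 4096 the I/O size in bytes, -t 5 the run time in seconds, and -m 0x3 the core mask, which is why exactly two reactors start in the trace. The later runs reuse the same harness with -o 65536 (bdev_verify_big_io) and with -w write_zeroes -t 1. A commented reconstruction of the invocation; the comments are editorial, and the trailing empty string is an argument passed through from the caller exactly as traced:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$BDEVPERF" --json "$CONF" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
    # -q: I/O queue depth            -o: I/O size in bytes
    # -w: workload type (verify)     -t: run time in seconds
    # -m: core mask (0x3 = cores 0 and 1); -C is carried over verbatim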
00:10:06.575 00:10:06.575 Latency(us) 00:10:06.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.575 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:06.575 Verification LBA range: start 0x0 length 0xbd0bd 00:10:06.575 Nvme0n1 : 5.06 1493.88 5.84 0.00 0.00 85443.64 17039.36 77213.32 00:10:06.575 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:06.575 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:06.575 Nvme0n1 : 5.06 1466.81 5.73 0.00 0.00 87072.21 11796.48 79596.45 00:10:06.575 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:06.575 Verification LBA range: start 0x0 length 0xa0000 00:10:06.575 Nvme1n1 : 5.06 1493.34 5.83 0.00 0.00 85302.40 18350.08 73876.95 00:10:06.575 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:06.575 Verification LBA range: start 0xa0000 length 0xa0000 00:10:06.575 Nvme1n1 : 5.06 1466.14 5.73 0.00 0.00 86930.26 12630.57 76736.70 00:10:06.575 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:06.575 Verification LBA range: start 0x0 length 0x80000 00:10:06.575 Nvme2n1 : 5.06 1492.80 5.83 0.00 0.00 85150.47 17992.61 73400.32 00:10:06.575 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:06.576 Verification LBA range: start 0x80000 length 0x80000 00:10:06.576 Nvme2n1 : 5.06 1465.75 5.73 0.00 0.00 86770.47 12571.00 74830.20 00:10:06.576 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:06.576 Verification LBA range: start 0x0 length 0x80000 00:10:06.576 Nvme2n2 : 5.06 1492.30 5.83 0.00 0.00 84974.60 17039.36 70063.94 00:10:06.576 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:06.576 Verification LBA range: start 0x80000 length 0x80000 00:10:06.576 Nvme2n2 : 5.07 1465.27 5.72 0.00 0.00 86635.72 12213.53 71493.82 00:10:06.576 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:06.576 Verification LBA range: start 0x0 length 0x80000 00:10:06.576 Nvme2n3 : 5.07 1501.66 5.87 0.00 0.00 84294.41 3902.37 72447.07 00:10:06.576 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:06.576 Verification LBA range: start 0x80000 length 0x80000 00:10:06.576 Nvme2n3 : 5.07 1464.79 5.72 0.00 0.00 86471.61 12451.84 74830.20 00:10:06.576 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:06.576 Verification LBA range: start 0x0 length 0x20000 00:10:06.576 Nvme3n1 : 5.08 1511.13 5.90 0.00 0.00 83652.70 7596.22 76736.70 00:10:06.576 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:06.576 Verification LBA range: start 0x20000 length 0x20000 00:10:06.576 Nvme3n1 : 5.07 1464.33 5.72 0.00 0.00 86313.28 12451.84 78166.57 00:10:06.576 =================================================================================================================== 00:10:06.576 Total : 17778.20 69.45 0.00 0.00 85738.88 3902.37 79596.45 00:10:07.522 00:10:07.522 real 0m7.766s 00:10:07.522 user 0m13.999s 00:10:07.522 sys 0m0.339s 00:10:07.522 18:17:19 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.522 18:17:19 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:07.522 ************************************ 00:10:07.522 END TEST bdev_verify 00:10:07.522 ************************************ 00:10:07.781 18:17:19 blockdev_nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:10:07.781 18:17:19 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:07.781 18:17:19 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:10:07.781 18:17:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.781 18:17:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:07.781 ************************************ 00:10:07.781 START TEST bdev_verify_big_io 00:10:07.781 ************************************ 00:10:07.781 18:17:19 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:07.781 [2024-07-22 18:17:19.653186] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:07.781 [2024-07-22 18:17:19.653394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67095 ] 00:10:08.042 [2024-07-22 18:17:19.829645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:08.042 [2024-07-22 18:17:20.053500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.042 [2024-07-22 18:17:20.053514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.984 Running I/O for 5 seconds... 00:10:15.566 00:10:15.566 Latency(us) 00:10:15.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.566 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:15.566 Verification LBA range: start 0x0 length 0xbd0b 00:10:15.566 Nvme0n1 : 5.63 130.79 8.17 0.00 0.00 946277.75 20375.74 968502.92 00:10:15.566 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:15.566 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:15.566 Nvme0n1 : 5.71 123.02 7.69 0.00 0.00 1006496.35 36938.47 953250.91 00:10:15.566 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:15.566 Verification LBA range: start 0x0 length 0xa000 00:10:15.566 Nvme1n1 : 5.74 133.86 8.37 0.00 0.00 901958.44 90558.84 804543.77 00:10:15.566 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:15.566 Verification LBA range: start 0xa000 length 0xa000 00:10:15.566 Nvme1n1 : 5.72 123.12 7.69 0.00 0.00 979775.39 71493.82 865551.83 00:10:15.566 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:15.566 Verification LBA range: start 0x0 length 0x8000 00:10:15.566 Nvme2n1 : 5.74 133.80 8.36 0.00 0.00 873422.66 106764.10 804543.77 00:10:15.566 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:15.566 Verification LBA range: start 0x8000 length 0x8000 00:10:15.566 Nvme2n1 : 5.80 127.57 7.97 0.00 0.00 922376.69 23354.65 903681.86 00:10:15.566 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:15.566 Verification LBA range: start 0x0 length 0x8000 00:10:15.566 Nvme2n2 : 5.86 135.25 8.45 0.00 0.00 840977.89 24903.68 1616713.54 00:10:15.566 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 
00:10:15.566 Verification LBA range: start 0x8000 length 0x8000 00:10:15.566 Nvme2n2 : 5.80 127.03 7.94 0.00 0.00 897666.57 23116.33 1128649.08 00:10:15.566 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:15.566 Verification LBA range: start 0x0 length 0x8000 00:10:15.566 Nvme2n3 : 5.87 139.10 8.69 0.00 0.00 795961.00 44326.17 1639591.56 00:10:15.566 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:15.566 Verification LBA range: start 0x8000 length 0x8000 00:10:15.566 Nvme2n3 : 5.81 132.30 8.27 0.00 0.00 846424.13 55765.18 941811.90 00:10:15.566 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:15.567 Verification LBA range: start 0x0 length 0x2000 00:10:15.567 Nvme3n1 : 5.90 155.02 9.69 0.00 0.00 696212.56 875.05 1677721.60 00:10:15.567 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:15.567 Verification LBA range: start 0x2000 length 0x2000 00:10:15.567 Nvme3n1 : 5.87 148.94 9.31 0.00 0.00 733512.05 4855.62 968502.92 00:10:15.567 =================================================================================================================== 00:10:15.567 Total : 1609.78 100.61 0.00 0.00 863119.32 875.05 1677721.60 00:10:16.944 00:10:16.944 real 0m9.038s 00:10:16.944 user 0m16.554s 00:10:16.944 sys 0m0.374s 00:10:16.944 18:17:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.944 18:17:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:16.944 ************************************ 00:10:16.944 END TEST bdev_verify_big_io 00:10:16.944 ************************************ 00:10:16.944 18:17:28 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:16.944 18:17:28 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:16.944 18:17:28 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:16.944 18:17:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.944 18:17:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:16.944 ************************************ 00:10:16.944 START TEST bdev_write_zeroes 00:10:16.944 ************************************ 00:10:16.944 18:17:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:16.944 [2024-07-22 18:17:28.745247] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:16.944 [2024-07-22 18:17:28.745435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67210 ] 00:10:16.944 [2024-07-22 18:17:28.918987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.202 [2024-07-22 18:17:29.150949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.137 Running I/O for 1 seconds... 
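Every test in this log is driven by the same wrapper: run_test prints the starred START/END banners, times the body (the real/user/sys triplets between them), and propagates the exit status. A minimal sketch; the real helper in autotest_common.sh also manages xtrace state and failure bookkeeping:

    # Minimal sketch of run_test as reflected in the banners above and below.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        time "$@"
        local rc=$?
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }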
00:10:19.072 00:10:19.072 Latency(us) 00:10:19.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.072 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:19.072 Nvme0n1 : 1.02 8051.66 31.45 0.00 0.00 15839.99 12213.53 25141.99 00:10:19.072 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:19.072 Nvme1n1 : 1.02 8038.91 31.40 0.00 0.00 15837.91 12630.57 24427.05 00:10:19.072 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:19.072 Nvme2n1 : 1.02 8026.81 31.35 0.00 0.00 15795.04 12571.00 24784.52 00:10:19.072 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:19.072 Nvme2n2 : 1.02 8065.44 31.51 0.00 0.00 15707.34 9175.04 24903.68 00:10:19.072 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:19.072 Nvme2n3 : 1.03 8053.35 31.46 0.00 0.00 15679.29 8043.05 24427.05 00:10:19.072 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:19.072 Nvme3n1 : 1.03 8041.21 31.41 0.00 0.00 15656.02 6970.65 23950.43 00:10:19.072 =================================================================================================================== 00:10:19.072 Total : 48277.38 188.58 0.00 0.00 15752.32 6970.65 25141.99 00:10:20.348 00:10:20.348 real 0m3.490s 00:10:20.348 user 0m3.068s 00:10:20.348 sys 0m0.299s 00:10:20.348 18:17:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:20.348 18:17:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:20.348 ************************************ 00:10:20.348 END TEST bdev_write_zeroes 00:10:20.348 ************************************ 00:10:20.348 18:17:32 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:20.348 18:17:32 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:20.348 18:17:32 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:20.348 18:17:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.348 18:17:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:20.348 ************************************ 00:10:20.348 START TEST bdev_json_nonenclosed 00:10:20.348 ************************************ 00:10:20.348 18:17:32 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:20.348 [2024-07-22 18:17:32.285868] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
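The two JSON tests that close out the NVMe suite are negative tests: bdevperf is fed a config that is deliberately not enclosed in {} and then, in the second run, one whose 'subsystems' value is not an array. The expected outcome is the rejection traced below, with the app stopping on a non-zero code and the harness recording es=234; the caller then turns that expected failure into a pass with a trailing true. A sketch of the inversion, with path and flags as traced and the 234 comparison based on the es value the log records:

    # Sketch of the expected-failure pattern around the malformed-JSON runs.
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    if "$BDEVPERF" --json nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "ERROR: malformed JSON config was accepted" >&2
        exit 1
    else
        es=$?                  # 234 in the trace: spdk_app_stop'd on non-zero
        [ "$es" -eq 234 ]
    fi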
00:10:20.348 [2024-07-22 18:17:32.286078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67267 ] 00:10:20.607 [2024-07-22 18:17:32.460871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.866 [2024-07-22 18:17:32.701971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.866 [2024-07-22 18:17:32.702128] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:20.866 [2024-07-22 18:17:32.702159] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:20.866 [2024-07-22 18:17:32.702176] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:21.138 00:10:21.138 real 0m0.964s 00:10:21.138 user 0m0.693s 00:10:21.138 sys 0m0.164s 00:10:21.138 18:17:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:10:21.138 18:17:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.139 18:17:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:21.139 ************************************ 00:10:21.139 END TEST bdev_json_nonenclosed 00:10:21.139 ************************************ 00:10:21.443 18:17:33 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:10:21.443 18:17:33 blockdev_nvme -- bdev/blockdev.sh@781 -- # true 00:10:21.443 18:17:33 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:21.443 18:17:33 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:21.443 18:17:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.443 18:17:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:21.443 ************************************ 00:10:21.443 START TEST bdev_json_nonarray 00:10:21.443 ************************************ 00:10:21.443 18:17:33 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:21.443 [2024-07-22 18:17:33.301440] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:21.443 [2024-07-22 18:17:33.301646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67294 ] 00:10:21.701 [2024-07-22 18:17:33.475819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.960 [2024-07-22 18:17:33.719291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.960 [2024-07-22 18:17:33.719417] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:21.960 [2024-07-22 18:17:33.719449] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:21.960 [2024-07-22 18:17:33.719473] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:22.219 00:10:22.219 real 0m0.949s 00:10:22.219 user 0m0.687s 00:10:22.219 sys 0m0.155s 00:10:22.219 18:17:34 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:10:22.219 18:17:34 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.219 18:17:34 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:22.219 ************************************ 00:10:22.219 END TEST bdev_json_nonarray 00:10:22.219 ************************************ 00:10:22.219 18:17:34 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:10:22.219 18:17:34 blockdev_nvme -- bdev/blockdev.sh@784 -- # true 00:10:22.219 18:17:34 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:10:22.219 18:17:34 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:10:22.219 18:17:34 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:10:22.219 18:17:34 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:10:22.219 18:17:34 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:10:22.219 18:17:34 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:22.219 18:17:34 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:22.219 18:17:34 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:10:22.219 18:17:34 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:10:22.219 18:17:34 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:10:22.219 18:17:34 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:10:22.219 00:10:22.219 real 0m45.147s 00:10:22.219 user 1m6.499s 00:10:22.219 sys 0m7.197s 00:10:22.219 18:17:34 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.219 18:17:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:22.219 ************************************ 00:10:22.219 END TEST blockdev_nvme 00:10:22.219 ************************************ 00:10:22.219 18:17:34 -- common/autotest_common.sh@1142 -- # return 0 00:10:22.479 18:17:34 -- spdk/autotest.sh@213 -- # uname -s 00:10:22.479 18:17:34 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:10:22.479 18:17:34 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:22.479 18:17:34 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:22.479 18:17:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.479 18:17:34 -- common/autotest_common.sh@10 -- # set +x 00:10:22.479 ************************************ 00:10:22.479 START TEST blockdev_nvme_gpt 00:10:22.479 ************************************ 00:10:22.479 18:17:34 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:22.479 * Looking for test storage... 
00:10:22.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67374 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:22.479 18:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 67374 00:10:22.479 18:17:34 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 67374 ']' 00:10:22.479 18:17:34 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.479 18:17:34 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.480 18:17:34 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
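waitforlisten in the trace above blocks until the freshly launched spdk_tgt answers on its RPC socket. Stripped of its retry bookkeeping, the idea is roughly the poll below (a sketch, not the helper's actual body; rpc.py and the rpc_get_methods method are real SPDK entry points):

# poll the UNIX-domain RPC socket until the target responds
while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done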
00:10:22.480 18:17:34 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.480 18:17:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:22.480 [2024-07-22 18:17:34.441968] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:22.480 [2024-07-22 18:17:34.442177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67374 ] 00:10:22.738 [2024-07-22 18:17:34.604877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.997 [2024-07-22 18:17:34.855462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.933 18:17:35 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:23.933 18:17:35 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:10:23.934 18:17:35 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:10:23.934 18:17:35 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:10:23.934 18:17:35 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:24.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:24.192 Waiting for block devices as requested 00:10:24.450 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.450 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.450 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.709 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:30.009 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:30.009 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:30.009 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:10:30.009 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:10:30.009 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:10:30.009 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:30.009 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:10:30.009 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:10:30.010 BYT; 00:10:30.010 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:10:30.010 BYT; 00:10:30.010 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:30.010 18:17:41 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:30.010 18:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:10:30.976 The operation has completed successfully. 
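The get_spdk_gpt_old / get_spdk_gpt traces above boil down to scraping a GUID constant out of gpt.h and normalizing it. Condensed into standalone bash -- the IFS='()' read and the grep -w are verbatim from the trace; the two parameter expansions are one way to reproduce the before/after values the trace prints:

GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
# gpt.h carries the GUID as a macro argument list: ...(0x6527994e, 0x2c5a, 0x4eec, 0x9613, 0x8f5944074e8b)
IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
spdk_guid=${spdk_guid//, /-}   # 0x6527994e, 0x2c5a, ... -> 0x6527994e-0x2c5a-...
spdk_guid=${spdk_guid//0x/}    # -> 6527994e-2c5a-4eec-9613-8f5944074e8b
echo "$spdk_guid"

The same dance against SPDK_GPT_PART_TYPE_GUID_OLD yields 7c5222bd-8f5d-4087-9c00-bf9843c7b58c; the two sgdisk -t/-u calls then stamp those type GUIDs (plus the fixed unique GUIDs) onto the partitions parted just created, which is what lets the gpt bdev module claim Nvme1n1p1/Nvme1n1p2 later in the run.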
00:10:30.976 18:17:42 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:10:31.910 The operation has completed successfully. 00:10:31.910 18:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:32.477 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:33.044 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:33.044 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:33.044 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:33.044 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:33.044 18:17:44 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:10:33.044 18:17:44 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.044 18:17:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:33.044 [] 00:10:33.044 18:17:44 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.044 18:17:44 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:10:33.044 18:17:44 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:10:33.044 18:17:44 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:33.044 18:17:44 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:33.045 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:33.045 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.045 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.612 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.612 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:10:33.612 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.612 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.612 
18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.612 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:10:33.612 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.612 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.612 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:10:33.612 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:10:33.613 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9c8dcb53-189c-4302-ab7e-baa6ef713358"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9c8dcb53-189c-4302-ab7e-baa6ef713358",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b1693bd4-9f12-4b89-a1ff-d4a2de2b6453"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b1693bd4-9f12-4b89-a1ff-d4a2de2b6453",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b5c3c54b-7980-4f4f-b55f-6e8fbd64f5c4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b5c3c54b-7980-4f4f-b55f-6e8fbd64f5c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "949fbac9-87e5-4d9b-9b09-874edfe9b066"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "949fbac9-87e5-4d9b-9b09-874edfe9b066",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "59363c58-01a6-4fd0-bddf-ca0959a1b1b3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "59363c58-01a6-4fd0-bddf-ca0959a1b1b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:33.613 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:10:33.613 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:10:33.613 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:10:33.613 18:17:45 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 67374 00:10:33.613 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 67374 ']' 00:10:33.613 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 67374 00:10:33.613 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:10:33.613 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:33.613 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67374 00:10:33.613 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:33.613 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:33.613 killing process with pid 67374 00:10:33.613 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67374' 00:10:33.613 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 67374 00:10:33.613 18:17:45 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 67374 00:10:36.144 18:17:47 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:36.144 18:17:47 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:36.144 18:17:47 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:36.144 18:17:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.144 18:17:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:36.144 ************************************ 00:10:36.144 START TEST bdev_hello_world 00:10:36.144 ************************************ 00:10:36.144 18:17:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:36.144 [2024-07-22 18:17:47.937959] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:36.144 [2024-07-22 18:17:47.938145] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68013 ] 00:10:36.144 [2024-07-22 18:17:48.115112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.402 [2024-07-22 18:17:48.365726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.334 [2024-07-22 18:17:49.026554] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:37.334 [2024-07-22 18:17:49.026631] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:37.334 [2024-07-22 18:17:49.026662] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:37.334 [2024-07-22 18:17:49.029940] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:37.334 [2024-07-22 18:17:49.030590] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:37.334 [2024-07-22 18:17:49.030629] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:37.334 [2024-07-22 18:17:49.030821] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:10:37.334 00:10:37.334 [2024-07-22 18:17:49.030855] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:38.270 00:10:38.270 real 0m2.356s 00:10:38.270 user 0m1.950s 00:10:38.270 sys 0m0.293s 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:38.270 ************************************ 00:10:38.270 END TEST bdev_hello_world 00:10:38.270 ************************************ 00:10:38.270 18:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:38.270 18:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:10:38.270 18:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:38.270 18:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.270 18:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:38.270 ************************************ 00:10:38.270 START TEST bdev_bounds 00:10:38.270 ************************************ 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=68062 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:38.270 Process bdevio pid: 68062 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 68062' 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 68062 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68062 ']' 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:10:38.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:38.270 18:17:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:38.530 [2024-07-22 18:17:50.330019] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:38.530 [2024-07-22 18:17:50.330192] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68062 ] 00:10:38.530 [2024-07-22 18:17:50.494193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:38.794 [2024-07-22 18:17:50.736711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.794 [2024-07-22 18:17:50.736821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.794 [2024-07-22 18:17:50.736840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.730 18:17:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.730 18:17:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:10:39.730 18:17:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:39.730 I/O targets: 00:10:39.730 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:39.730 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:10:39.730 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:10:39.730 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:39.730 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:39.730 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:39.730 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:39.730 00:10:39.730 00:10:39.730 CUnit - A unit testing framework for C - Version 2.1-3 00:10:39.730 http://cunit.sourceforge.net/ 00:10:39.730 00:10:39.730 00:10:39.730 Suite: bdevio tests on: Nvme3n1 00:10:39.730 Test: blockdev write read block ...passed 00:10:39.730 Test: blockdev write zeroes read block ...passed 00:10:39.730 Test: blockdev write zeroes read no split ...passed 00:10:39.730 Test: blockdev write zeroes read split ...passed 00:10:39.730 Test: blockdev write zeroes read split partial ...passed 00:10:39.730 Test: blockdev reset ...[2024-07-22 18:17:51.624548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:10:39.730 passed 00:10:39.730 Test: blockdev write read 8 blocks ...[2024-07-22 18:17:51.628520] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:39.730 passed 00:10:39.730 Test: blockdev write read size > 128k ...passed 00:10:39.730 Test: blockdev write read invalid size ...passed 00:10:39.730 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:39.730 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:39.730 Test: blockdev write read max offset ...passed 00:10:39.730 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:39.730 Test: blockdev writev readv 8 blocks ...passed 00:10:39.730 Test: blockdev writev readv 30 x 1block ...passed 00:10:39.730 Test: blockdev writev readv block ...passed 00:10:39.730 Test: blockdev writev readv size > 128k ...passed 00:10:39.730 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:39.730 Test: blockdev comparev and writev ...[2024-07-22 18:17:51.637708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x276206000 len:0x1000 00:10:39.730 [2024-07-22 18:17:51.637771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:39.730 passed 00:10:39.730 Test: blockdev nvme passthru rw ...passed 00:10:39.730 Test: blockdev nvme passthru vendor specific ...passed 00:10:39.730 Test: blockdev nvme admin passthru ...[2024-07-22 18:17:51.638539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:39.730 [2024-07-22 18:17:51.638590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:39.730 passed 00:10:39.730 Test: blockdev copy ...passed 00:10:39.730 Suite: bdevio tests on: Nvme2n3 00:10:39.730 Test: blockdev write read block ...passed 00:10:39.730 Test: blockdev write zeroes read block ...passed 00:10:39.731 Test: blockdev write zeroes read no split ...passed 00:10:39.731 Test: blockdev write zeroes read split ...passed 00:10:39.731 Test: blockdev write zeroes read split partial ...passed 00:10:39.731 Test: blockdev reset ...[2024-07-22 18:17:51.716863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:39.731 [2024-07-22 18:17:51.721234] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:39.731 passed 00:10:39.731 Test: blockdev write read 8 blocks ...passed 00:10:39.731 Test: blockdev write read size > 128k ...passed 00:10:39.731 Test: blockdev write read invalid size ...passed 00:10:39.731 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:39.731 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:39.731 Test: blockdev write read max offset ...passed 00:10:39.731 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:39.731 Test: blockdev writev readv 8 blocks ...passed 00:10:39.731 Test: blockdev writev readv 30 x 1block ...passed 00:10:39.731 Test: blockdev writev readv block ...passed 00:10:39.731 Test: blockdev writev readv size > 128k ...passed 00:10:39.731 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:39.731 Test: blockdev comparev and writev ...[2024-07-22 18:17:51.730835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28f63c000 len:0x1000 00:10:39.731 [2024-07-22 18:17:51.730895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:39.731 passed 00:10:39.731 Test: blockdev nvme passthru rw ...passed 00:10:39.731 Test: blockdev nvme passthru vendor specific ...[2024-07-22 18:17:51.731819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:39.731 [2024-07-22 18:17:51.731864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:39.731 passed 00:10:39.731 Test: blockdev nvme admin passthru ...passed 00:10:39.731 Test: blockdev copy ...passed 00:10:39.731 Suite: bdevio tests on: Nvme2n2 00:10:39.731 Test: blockdev write read block ...passed 00:10:39.731 Test: blockdev write zeroes read block ...passed 00:10:39.990 Test: blockdev write zeroes read no split ...passed 00:10:39.990 Test: blockdev write zeroes read split ...passed 00:10:39.990 Test: blockdev write zeroes read split partial ...passed 00:10:39.990 Test: blockdev reset ...[2024-07-22 18:17:51.809934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:39.990 [2024-07-22 18:17:51.814250] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:39.990 passed 00:10:39.990 Test: blockdev write read 8 blocks ...passed 00:10:39.990 Test: blockdev write read size > 128k ...passed 00:10:39.990 Test: blockdev write read invalid size ...passed 00:10:39.990 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:39.990 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:39.990 Test: blockdev write read max offset ...passed 00:10:39.990 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:39.990 Test: blockdev writev readv 8 blocks ...passed 00:10:39.990 Test: blockdev writev readv 30 x 1block ...passed 00:10:39.990 Test: blockdev writev readv block ...passed 00:10:39.990 Test: blockdev writev readv size > 128k ...passed 00:10:39.990 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:39.990 Test: blockdev comparev and writev ...[2024-07-22 18:17:51.824959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28f636000 len:0x1000 00:10:39.990 [2024-07-22 18:17:51.825030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:39.990 passed 00:10:39.990 Test: blockdev nvme passthru rw ...passed 00:10:39.990 Test: blockdev nvme passthru vendor specific ...passed 00:10:39.990 Test: blockdev nvme admin passthru ...[2024-07-22 18:17:51.825797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:39.990 [2024-07-22 18:17:51.825846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:39.990 passed 00:10:39.990 Test: blockdev copy ...passed 00:10:39.990 Suite: bdevio tests on: Nvme2n1 00:10:39.990 Test: blockdev write read block ...passed 00:10:39.990 Test: blockdev write zeroes read block ...passed 00:10:39.990 Test: blockdev write zeroes read no split ...passed 00:10:39.990 Test: blockdev write zeroes read split ...passed 00:10:39.990 Test: blockdev write zeroes read split partial ...passed 00:10:39.990 Test: blockdev reset ...[2024-07-22 18:17:51.899922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:39.990 [2024-07-22 18:17:51.904291] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:39.990 passed 00:10:39.990 Test: blockdev write read 8 blocks ...passed 00:10:39.990 Test: blockdev write read size > 128k ...passed 00:10:39.990 Test: blockdev write read invalid size ...passed 00:10:39.990 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:39.990 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:39.990 Test: blockdev write read max offset ...passed 00:10:39.990 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:39.990 Test: blockdev writev readv 8 blocks ...passed 00:10:39.990 Test: blockdev writev readv 30 x 1block ...passed 00:10:39.990 Test: blockdev writev readv block ...passed 00:10:39.990 Test: blockdev writev readv size > 128k ...passed 00:10:39.990 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:39.990 Test: blockdev comparev and writev ...[2024-07-22 18:17:51.913735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28f632000 len:0x1000 00:10:39.990 [2024-07-22 18:17:51.913797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:39.990 passed 00:10:39.990 Test: blockdev nvme passthru rw ...passed 00:10:39.990 Test: blockdev nvme passthru vendor specific ...[2024-07-22 18:17:51.914557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:39.990 [2024-07-22 18:17:51.914600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:39.990 passed 00:10:39.990 Test: blockdev nvme admin passthru ...passed 00:10:39.990 Test: blockdev copy ...passed 00:10:39.990 Suite: bdevio tests on: Nvme1n1p2 00:10:39.990 Test: blockdev write read block ...passed 00:10:39.990 Test: blockdev write zeroes read block ...passed 00:10:39.990 Test: blockdev write zeroes read no split ...passed 00:10:39.990 Test: blockdev write zeroes read split ...passed 00:10:39.990 Test: blockdev write zeroes read split partial ...passed 00:10:39.990 Test: blockdev reset ...[2024-07-22 18:17:51.994843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:10:39.990 passed 00:10:39.990 Test: blockdev write read 8 blocks ...[2024-07-22 18:17:51.998661] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:39.990 passed 00:10:39.990 Test: blockdev write read size > 128k ...passed 00:10:39.990 Test: blockdev write read invalid size ...passed 00:10:39.990 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:39.990 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:39.990 Test: blockdev write read max offset ...passed 00:10:39.990 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:39.990 Test: blockdev writev readv 8 blocks ...passed 00:10:39.990 Test: blockdev writev readv 30 x 1block ...passed 00:10:40.250 Test: blockdev writev readv block ...passed 00:10:40.250 Test: blockdev writev readv size > 128k ...passed 00:10:40.250 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:40.250 Test: blockdev comparev and writev ...[2024-07-22 18:17:52.007795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x28f62e000 len:0x1000 00:10:40.250 [2024-07-22 18:17:52.007867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:40.250 passed 00:10:40.250 Test: blockdev nvme passthru rw ...passed 00:10:40.250 Test: blockdev nvme passthru vendor specific ...passed 00:10:40.250 Test: blockdev nvme admin passthru ...passed 00:10:40.250 Test: blockdev copy ...passed 00:10:40.250 Suite: bdevio tests on: Nvme1n1p1 00:10:40.250 Test: blockdev write read block ...passed 00:10:40.250 Test: blockdev write zeroes read block ...passed 00:10:40.250 Test: blockdev write zeroes read no split ...passed 00:10:40.250 Test: blockdev write zeroes read split ...passed 00:10:40.250 Test: blockdev write zeroes read split partial ...passed 00:10:40.250 Test: blockdev reset ...[2024-07-22 18:17:52.090436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:10:40.250 [2024-07-22 18:17:52.094219] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:40.250 passed 00:10:40.250 Test: blockdev write read 8 blocks ...passed 00:10:40.250 Test: blockdev write read size > 128k ...passed 00:10:40.250 Test: blockdev write read invalid size ...passed 00:10:40.250 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:40.250 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:40.250 Test: blockdev write read max offset ...passed 00:10:40.250 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:40.250 Test: blockdev writev readv 8 blocks ...passed 00:10:40.250 Test: blockdev writev readv 30 x 1block ...passed 00:10:40.250 Test: blockdev writev readv block ...passed 00:10:40.250 Test: blockdev writev readv size > 128k ...passed 00:10:40.250 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:40.250 Test: blockdev comparev and writev ...[2024-07-22 18:17:52.105386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x27de0e000 len:0x1000 00:10:40.250 [2024-07-22 18:17:52.105597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:passed 00:10:40.250 Test: blockdev nvme passthru rw ...passed 00:10:40.250 Test: blockdev nvme passthru vendor specific ...passed 00:10:40.250 Test: blockdev nvme admin passthru ...passed 00:10:40.250 Test: blockdev copy ...0 sqhd:0018 p:1 m:0 dnr:1 00:10:40.250 passed 00:10:40.250 Suite: bdevio tests on: Nvme0n1 00:10:40.250 Test: blockdev write read block ...passed 00:10:40.250 Test: blockdev write zeroes read block ...passed 00:10:40.250 Test: blockdev write zeroes read no split ...passed 00:10:40.250 Test: blockdev write zeroes read split ...passed 00:10:40.250 Test: blockdev write zeroes read split partial ...passed 00:10:40.250 Test: blockdev reset ...[2024-07-22 18:17:52.209479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:40.250 passed 00:10:40.250 Test: blockdev write read 8 blocks ...[2024-07-22 18:17:52.213243] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:40.250 passed 00:10:40.250 Test: blockdev write read size > 128k ...passed 00:10:40.250 Test: blockdev write read invalid size ...passed 00:10:40.250 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:40.250 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:40.250 Test: blockdev write read max offset ...passed 00:10:40.250 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:40.250 Test: blockdev writev readv 8 blocks ...passed 00:10:40.250 Test: blockdev writev readv 30 x 1block ...passed 00:10:40.250 Test: blockdev writev readv block ...passed 00:10:40.250 Test: blockdev writev readv size > 128k ...passed 00:10:40.250 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:40.250 Test: blockdev comparev and writev ...passed 00:10:40.250 Test: blockdev nvme passthru rw ...[2024-07-22 18:17:52.221211] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:40.250 separate metadata which is not supported yet. 
00:10:40.250 passed 00:10:40.250 Test: blockdev nvme passthru vendor specific ...passed 00:10:40.250 Test: blockdev nvme admin passthru ...[2024-07-22 18:17:52.221845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:40.250 [2024-07-22 18:17:52.221904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:40.250 passed 00:10:40.250 Test: blockdev copy ...passed 00:10:40.250 00:10:40.250 Run Summary: Type Total Ran Passed Failed Inactive 00:10:40.250 suites 7 7 n/a 0 0 00:10:40.250 tests 161 161 161 0 0 00:10:40.250 asserts 1025 1025 1025 0 n/a 00:10:40.250 00:10:40.250 Elapsed time = 1.830 seconds 00:10:40.250 0 00:10:40.250 18:17:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 68062 00:10:40.250 18:17:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68062 ']' 00:10:40.250 18:17:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68062 00:10:40.250 18:17:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:10:40.250 18:17:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:40.250 18:17:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68062 00:10:40.509 18:17:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:40.509 18:17:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:40.509 18:17:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68062' 00:10:40.509 killing process with pid 68062 00:10:40.509 18:17:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68062 00:10:40.509 18:17:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68062 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:41.911 00:10:41.911 real 0m3.262s 00:10:41.911 user 0m8.112s 00:10:41.911 sys 0m0.454s 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:41.911 ************************************ 00:10:41.911 END TEST bdev_bounds 00:10:41.911 ************************************ 00:10:41.911 18:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:41.911 18:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:41.911 18:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:41.911 18:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.911 18:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:41.911 ************************************ 00:10:41.911 START TEST bdev_nbd 00:10:41.911 ************************************ 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname 
-s 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=68127 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 68127 /var/tmp/spdk-nbd.sock 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 68127 ']' 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:41.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.911 18:17:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:41.911 [2024-07-22 18:17:53.669894] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
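The bdev_svc app launched just above on the private socket /var/tmp/spdk-nbd.sock is the process every rpc.py call in the remainder of this trace talks to. Reduced to its essentials, the per-device export round trip this test exercises looks roughly like the following sketch (paths copied from this log; the nbd kernel module is assumed to be loaded already, since the harness only checks that /sys/module/nbd exists; the probe file name /tmp/nbdprobe is illustrative):

    app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # serve the bdevs described in bdev.json over a private RPC socket
    "$app" -r "$sock" -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # one simple way to wait for the socket to answer (the waitforlisten step above)
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    # export a bdev over NBD; the device argument is optional, and the first round of
    # calls below omits it and reads back whichever /dev/nbdN the RPC picked
    "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdprobe bs=4096 count=1 iflag=direct
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
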
00:10:41.911 [2024-07-22 18:17:53.670296] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.911 [2024-07-22 18:17:53.848471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.170 [2024-07-22 18:17:54.092926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:42.811 18:17:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:43.071 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:43.071 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:43.071 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:43.071 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:43.071 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:43.071 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:43.071 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:43.071 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.331 1+0 records in 00:10:43.331 1+0 records out 00:10:43.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644449 s, 6.4 MB/s 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:43.331 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:43.590 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:43.590 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:43.590 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:43.590 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.591 1+0 records in 00:10:43.591 1+0 records out 00:10:43.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461634 s, 8.9 MB/s 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:43.591 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:10:43.849 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:43.849 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:43.849 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:43.849 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.849 1+0 records in 00:10:43.849 1+0 records out 00:10:43.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742702 s, 5.5 MB/s 00:10:43.849 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.850 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:43.850 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.850 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:43.850 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:43.850 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:43.850 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:43.850 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:44.108 1+0 records in 00:10:44.108 1+0 records out 00:10:44.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000939678 s, 4.4 MB/s 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:44.108 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.109 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:44.109 18:17:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:44.109 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:44.109 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:44.109 18:17:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:44.367 1+0 records in 00:10:44.367 1+0 records out 00:10:44.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608104 s, 6.7 MB/s 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:44.367 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:44.626 1+0 records in 00:10:44.626 1+0 records out 00:10:44.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000835444 s, 4.9 MB/s 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:44.626 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:44.885 1+0 records in 00:10:44.885 1+0 records out 00:10:44.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000996558 s, 4.1 MB/s 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:44.885 18:17:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:45.145 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd0", 00:10:45.145 "bdev_name": "Nvme0n1" 00:10:45.145 }, 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd1", 00:10:45.145 "bdev_name": "Nvme1n1p1" 00:10:45.145 }, 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd2", 00:10:45.145 "bdev_name": "Nvme1n1p2" 00:10:45.145 }, 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd3", 00:10:45.145 "bdev_name": "Nvme2n1" 00:10:45.145 }, 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd4", 00:10:45.145 "bdev_name": "Nvme2n2" 00:10:45.145 }, 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd5", 00:10:45.145 "bdev_name": "Nvme2n3" 00:10:45.145 }, 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd6", 00:10:45.145 "bdev_name": "Nvme3n1" 00:10:45.145 } 00:10:45.145 ]' 00:10:45.145 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:45.145 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd0", 00:10:45.145 "bdev_name": "Nvme0n1" 00:10:45.145 }, 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd1", 00:10:45.145 "bdev_name": "Nvme1n1p1" 00:10:45.145 }, 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd2", 00:10:45.145 "bdev_name": "Nvme1n1p2" 00:10:45.145 }, 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd3", 00:10:45.145 "bdev_name": "Nvme2n1" 00:10:45.145 }, 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd4", 00:10:45.145 "bdev_name": "Nvme2n2" 00:10:45.145 }, 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd5", 00:10:45.145 "bdev_name": "Nvme2n3" 00:10:45.145 }, 00:10:45.145 { 00:10:45.145 "nbd_device": "/dev/nbd6", 00:10:45.145 "bdev_name": "Nvme3n1" 00:10:45.145 } 00:10:45.145 ]' 00:10:45.145 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:45.404 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:45.404 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.404 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:45.404 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:45.404 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:45.404 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.404 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:45.663 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:45.663 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:45.663 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:45.663 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:45.663 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:45.663 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:45.663 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:45.663 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:45.663 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.663 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:45.922 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:45.922 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:45.922 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:45.922 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:45.922 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:45.922 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:45.922 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:45.922 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:45.922 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.922 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:46.182 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:46.182 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:46.182 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:46.182 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.182 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.182 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:46.182 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:46.182 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.182 18:17:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:46.182 18:17:57 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:46.441 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:46.441 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:46.441 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:46.441 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.441 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.441 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:46.441 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:46.441 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.441 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:46.441 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:46.700 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:46.700 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:46.700 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:46.700 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.700 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.700 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:46.700 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:46.700 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.700 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:46.700 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:47.055 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:47.055 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:47.055 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:47.055 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:47.055 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:47.055 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:47.055 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:47.055 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:47.055 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:47.055 18:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:47.313 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:47.313 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:47.313 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:10:47.313 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:47.313 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:47.313 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:47.313 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:47.313 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:47.313 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:47.314 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.314 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:47.314 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:47.314 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:47.314 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:47.573 
18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:47.573 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:47.832 /dev/nbd0 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:47.832 1+0 records in 00:10:47.832 1+0 records out 00:10:47.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467048 s, 8.8 MB/s 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:47.832 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:10:48.091 /dev/nbd1 00:10:48.091 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:48.091 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:48.091 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:48.091 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:48.091 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:48.091 18:17:59 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:48.091 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:48.091 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:48.091 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:48.091 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:48.091 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:48.091 1+0 records in 00:10:48.091 1+0 records out 00:10:48.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575895 s, 7.1 MB/s 00:10:48.091 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.092 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:48.092 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.092 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:48.092 18:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:48.092 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:48.092 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:48.092 18:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:10:48.351 /dev/nbd10 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:48.351 1+0 records in 00:10:48.351 1+0 records out 00:10:48.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470327 s, 8.7 MB/s 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:48.351 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:48.611 /dev/nbd11 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:48.611 1+0 records in 00:10:48.611 1+0 records out 00:10:48.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610475 s, 6.7 MB/s 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:48.611 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:48.870 /dev/nbd12 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 
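Every device in this trace passes through the same readiness check; condensed from the xtrace repeated above into a single function, it is essentially the sketch below (the retry delay is an assumption, since the trace only shows iterations that succeed on the first attempt):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # phase 1: wait until the kernel lists the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed retry delay; not visible in this trace
        done
        for ((i = 1; i <= 20; i++)); do
            # phase 2: prove the export serves data with one direct-I/O read
            dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
                bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
            rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
            [ "$size" != 0 ] && return 0   # a non-empty read means the device is live
        done
        return 1
    }

The "1+0 records in/out" lines and the "return 0" markers repeated through this trace are phase 2 of this check succeeding on the first try for each device.
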
00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:48.870 1+0 records in 00:10:48.870 1+0 records out 00:10:48.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631121 s, 6.5 MB/s 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:48.870 18:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:49.130 /dev/nbd13 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:49.389 1+0 records in 00:10:49.389 1+0 records out 00:10:49.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670195 s, 6.1 MB/s 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:49.389 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:49.389 /dev/nbd14 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:49.648 1+0 records in 00:10:49.648 1+0 records out 00:10:49.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000911244 s, 4.5 MB/s 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:49.648 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd0", 00:10:49.907 "bdev_name": "Nvme0n1" 00:10:49.907 }, 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd1", 00:10:49.907 "bdev_name": "Nvme1n1p1" 00:10:49.907 }, 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd10", 00:10:49.907 "bdev_name": "Nvme1n1p2" 00:10:49.907 }, 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd11", 00:10:49.907 "bdev_name": "Nvme2n1" 00:10:49.907 }, 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd12", 00:10:49.907 "bdev_name": "Nvme2n2" 00:10:49.907 }, 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd13", 00:10:49.907 "bdev_name": "Nvme2n3" 
00:10:49.907 }, 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd14", 00:10:49.907 "bdev_name": "Nvme3n1" 00:10:49.907 } 00:10:49.907 ]' 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd0", 00:10:49.907 "bdev_name": "Nvme0n1" 00:10:49.907 }, 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd1", 00:10:49.907 "bdev_name": "Nvme1n1p1" 00:10:49.907 }, 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd10", 00:10:49.907 "bdev_name": "Nvme1n1p2" 00:10:49.907 }, 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd11", 00:10:49.907 "bdev_name": "Nvme2n1" 00:10:49.907 }, 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd12", 00:10:49.907 "bdev_name": "Nvme2n2" 00:10:49.907 }, 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd13", 00:10:49.907 "bdev_name": "Nvme2n3" 00:10:49.907 }, 00:10:49.907 { 00:10:49.907 "nbd_device": "/dev/nbd14", 00:10:49.907 "bdev_name": "Nvme3n1" 00:10:49.907 } 00:10:49.907 ]' 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:49.907 /dev/nbd1 00:10:49.907 /dev/nbd10 00:10:49.907 /dev/nbd11 00:10:49.907 /dev/nbd12 00:10:49.907 /dev/nbd13 00:10:49.907 /dev/nbd14' 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:49.907 /dev/nbd1 00:10:49.907 /dev/nbd10 00:10:49.907 /dev/nbd11 00:10:49.907 /dev/nbd12 00:10:49.907 /dev/nbd13 00:10:49.907 /dev/nbd14' 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:49.907 256+0 records in 00:10:49.907 256+0 records out 00:10:49.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481657 s, 218 MB/s 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:49.907 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:50.166 256+0 records in 00:10:50.166 256+0 records out 00:10:50.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.166286 s, 6.3 MB/s 00:10:50.166 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:50.166 18:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:50.166 256+0 records in 00:10:50.166 256+0 records out 00:10:50.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190672 s, 5.5 MB/s 00:10:50.166 18:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:50.166 18:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:50.425 256+0 records in 00:10:50.425 256+0 records out 00:10:50.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178912 s, 5.9 MB/s 00:10:50.425 18:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:50.425 18:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:50.684 256+0 records in 00:10:50.684 256+0 records out 00:10:50.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188554 s, 5.6 MB/s 00:10:50.684 18:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:50.684 18:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:50.684 256+0 records in 00:10:50.684 256+0 records out 00:10:50.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159783 s, 6.6 MB/s 00:10:50.684 18:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:50.684 18:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:50.944 256+0 records in 00:10:50.944 256+0 records out 00:10:50.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18154 s, 5.8 MB/s 00:10:50.944 18:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:50.944 18:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:51.203 256+0 records in 00:10:51.203 256+0 records out 00:10:51.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189857 s, 5.5 MB/s 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.203 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:51.461 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:51.461 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:51.461 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:51.461 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:51.461 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:51.461 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:51.461 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:51.461 18:18:03 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:51.461 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.461 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:51.720 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:51.720 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:51.720 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:51.720 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:51.720 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:51.720 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:51.720 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:51.720 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:51.720 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.720 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:51.979 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:51.979 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:51.979 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:51.979 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:51.979 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:51.979 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:51.979 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:51.979 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:51.979 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.979 18:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:52.237 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:52.237 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:52.237 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:52.237 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.237 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.237 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:52.237 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:52.237 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.237 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:52.237 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:52.496 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:10:52.496 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:52.496 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:52.496 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.496 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.496 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:52.496 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:52.496 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.496 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:52.496 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:52.755 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:52.755 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:52.755 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:52.755 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.755 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.755 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:52.755 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:52.755 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.755 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:52.755 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:53.013 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:53.013 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:53.013 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:53.013 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:53.013 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:53.013 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:53.013 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:53.014 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:53.014 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:53.014 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:53.014 18:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:10:53.272 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:53.896 malloc_lvol_verify 00:10:53.896 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:53.896 d0ef39bf-e62e-46f6-9c6c-9484a8634c98 00:10:53.896 18:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:54.154 96defb45-e85b-4de1-9e9e-c7ded379b827 00:10:54.154 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:54.413 /dev/nbd0 00:10:54.413 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:10:54.413 mke2fs 1.46.5 (30-Dec-2021) 00:10:54.413 Discarding device blocks: 0/4096 done 00:10:54.413 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:54.413 00:10:54.413 Allocating group tables: 0/1 done 00:10:54.413 Writing inode tables: 0/1 done 00:10:54.413 Creating journal (1024 blocks): done 00:10:54.413 Writing superblocks and filesystem accounting information: 0/1 done 00:10:54.413 00:10:54.413 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:10:54.414 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:54.414 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.414 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:54.414 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:54.414 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:54.414 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:10:54.414 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:54.672 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:54.672 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:54.672 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:54.672 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:54.672 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:54.672 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:54.672 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:54.672 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:54.672 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:10:54.672 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:10:54.672 18:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 68127 00:10:54.672 18:18:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 68127 ']' 00:10:54.673 18:18:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 68127 00:10:54.673 18:18:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:10:54.673 18:18:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:54.673 18:18:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68127 00:10:54.673 18:18:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:54.673 killing process with pid 68127 00:10:54.673 18:18:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:54.673 18:18:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68127' 00:10:54.673 18:18:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 68127 00:10:54.673 18:18:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 68127 00:10:56.051 18:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:56.051 00:10:56.051 real 0m14.358s 00:10:56.051 user 0m20.077s 00:10:56.051 sys 0m4.685s 00:10:56.051 18:18:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:56.051 18:18:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:56.051 ************************************ 00:10:56.051 END TEST bdev_nbd 00:10:56.051 ************************************ 00:10:56.051 18:18:07 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:56.051 18:18:07 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:10:56.051 18:18:07 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:10:56.051 18:18:07 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:10:56.051 skipping fio tests on NVMe due to multi-ns failures. 00:10:56.051 18:18:07 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:10:56.051 18:18:07 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:56.051 18:18:07 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:56.051 18:18:07 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:10:56.051 18:18:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.051 18:18:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:56.051 ************************************ 00:10:56.051 START TEST bdev_verify 00:10:56.051 ************************************ 00:10:56.051 18:18:07 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:56.051 [2024-07-22 18:18:08.064572] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:56.051 [2024-07-22 18:18:08.064776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68573 ] 00:10:56.310 [2024-07-22 18:18:08.233791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:56.578 [2024-07-22 18:18:08.476186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.578 [2024-07-22 18:18:08.476202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.524 Running I/O for 5 seconds... 
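[annotation] While that verify pass runs, it is worth condensing what the bdev_nbd section above actually exercised. This is a sketch of the cycle nbd_common.sh traces there, not the script itself: the device list and RPC socket match the log, but the temp-file path is shortened and the 0.1s poll interval is an assumption (the trace above never reaches the sleep because the devices are already gone).

    cd /home/vagrant/spdk_repo/spdk
    rpc_sock=/var/tmp/spdk-nbd.sock
    tmp_file=/tmp/nbdrandtest    # the run above uses spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)

    # write pass: 1 MiB of random data pushed through every nbd device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify pass: read each device back and compare byte-for-byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

    # teardown: stop each disk over RPC, then poll /proc/partitions until it is gone
    for dev in "${nbd_list[@]}"; do
        scripts/rpc.py -s "$rpc_sock" nbd_stop_disk "$dev"
        nbd_name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1    # interval assumed; the trace only shows the loop bounds
        done
    done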
00:11:02.795 00:11:02.795 Latency(us) 00:11:02.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.795 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0x0 length 0xbd0bd 00:11:02.795 Nvme0n1 : 5.10 1255.13 4.90 0.00 0.00 101749.05 20256.58 96278.34 00:11:02.795 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:11:02.795 Nvme0n1 : 5.09 1218.96 4.76 0.00 0.00 104362.28 15490.33 93418.59 00:11:02.795 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0x0 length 0x4ff80 00:11:02.795 Nvme1n1p1 : 5.10 1254.69 4.90 0.00 0.00 101569.86 19422.49 91988.71 00:11:02.795 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0x4ff80 length 0x4ff80 00:11:02.795 Nvme1n1p1 : 5.10 1218.38 4.76 0.00 0.00 104185.14 15073.28 89128.96 00:11:02.795 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0x0 length 0x4ff7f 00:11:02.795 Nvme1n1p2 : 5.10 1254.26 4.90 0.00 0.00 101421.83 17635.14 88652.33 00:11:02.795 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:11:02.795 Nvme1n1p2 : 5.11 1226.98 4.79 0.00 0.00 103629.16 12273.11 86745.83 00:11:02.795 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0x0 length 0x80000 00:11:02.795 Nvme2n1 : 5.10 1253.86 4.90 0.00 0.00 101252.79 17277.67 85792.58 00:11:02.795 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0x80000 length 0x80000 00:11:02.795 Nvme2n1 : 5.11 1226.51 4.79 0.00 0.00 103454.44 12630.57 83886.08 00:11:02.795 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0x0 length 0x80000 00:11:02.795 Nvme2n2 : 5.11 1253.44 4.90 0.00 0.00 101089.16 16920.20 89128.96 00:11:02.795 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0x80000 length 0x80000 00:11:02.795 Nvme2n2 : 5.12 1226.08 4.79 0.00 0.00 103295.96 12928.47 84362.71 00:11:02.795 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0x0 length 0x80000 00:11:02.795 Nvme2n3 : 5.11 1253.01 4.89 0.00 0.00 100922.30 16443.58 93895.21 00:11:02.795 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0x80000 length 0x80000 00:11:02.795 Nvme2n3 : 5.12 1225.65 4.79 0.00 0.00 103125.68 13107.20 88652.33 00:11:02.795 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0x0 length 0x20000 00:11:02.795 Nvme3n1 : 5.11 1252.58 4.89 0.00 0.00 100736.97 13822.14 96278.34 00:11:02.795 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:02.795 Verification LBA range: start 0x20000 length 0x20000 00:11:02.795 Nvme3n1 : 5.12 1225.22 4.79 0.00 0.00 102946.72 12988.04 93895.21 00:11:02.795 =================================================================================================================== 00:11:02.795 Total : 17344.76 67.75 0.00 0.00 102395.66 12273.11 
96278.34 00:11:04.174 00:11:04.174 real 0m7.820s 00:11:04.174 user 0m14.169s 00:11:04.174 sys 0m0.330s 00:11:04.174 18:18:15 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:04.174 ************************************ 00:11:04.174 END TEST bdev_verify 00:11:04.174 18:18:15 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:04.174 ************************************ 00:11:04.174 18:18:15 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:11:04.174 18:18:15 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:04.174 18:18:15 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:11:04.174 18:18:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.174 18:18:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:04.174 ************************************ 00:11:04.174 START TEST bdev_verify_big_io 00:11:04.174 ************************************ 00:11:04.174 18:18:15 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:04.174 [2024-07-22 18:18:15.952246] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:04.174 [2024-07-22 18:18:15.952455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68678 ] 00:11:04.174 [2024-07-22 18:18:16.128828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:04.433 [2024-07-22 18:18:16.373613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.433 [2024-07-22 18:18:16.373629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.368 Running I/O for 5 seconds... 
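[annotation] Both this big-I/O run and the plain bdev_verify run before it reduce to a single bdevperf invocation; only the I/O size differs between them. A sketch with the traced flags spelled out; the reading of -C is an inference from the two per-core job rows each bdev shows in the table above, not something the log states directly.

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev definitions to load
        -q 128      # queue depth per job
        -o 65536    # I/O size in bytes (4096 for the plain bdev_verify run)
        -w verify   # write a pattern, read it back, compare
        -t 5        # run time in seconds
        -C          # inferred from the table: one job per core for each bdev
        -m 0x3      # core mask, hence the reactors on cores 0 and 1
    )
    "$bdevperf" "${args[@]}"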
00:11:11.933 00:11:11.933 Latency(us) 00:11:11.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.933 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0x0 length 0xbd0b 00:11:11.933 Nvme0n1 : 5.90 97.60 6.10 0.00 0.00 1256956.90 39798.23 1624339.55 00:11:11.933 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:11.933 Nvme0n1 : 5.90 106.97 6.69 0.00 0.00 1137023.92 25856.93 1479445.41 00:11:11.933 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0x0 length 0x4ff8 00:11:11.933 Nvme1n1p1 : 5.90 97.57 6.10 0.00 0.00 1223614.22 141081.13 1670095.59 00:11:11.933 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0x4ff8 length 0x4ff8 00:11:11.933 Nvme1n1p1 : 5.90 108.64 6.79 0.00 0.00 1090984.88 111053.73 941811.90 00:11:11.933 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0x0 length 0x4ff7 00:11:11.933 Nvme1n1p2 : 5.91 108.70 6.79 0.00 0.00 1065658.05 132501.88 1090519.04 00:11:11.933 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0x4ff7 length 0x4ff7 00:11:11.933 Nvme1n1p2 : 5.81 100.45 6.28 0.00 0.00 1151916.40 132501.88 1914127.83 00:11:11.933 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0x0 length 0x8000 00:11:11.933 Nvme2n1 : 5.91 112.88 7.06 0.00 0.00 1013785.56 89128.96 1189657.13 00:11:11.933 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0x8000 length 0x8000 00:11:11.933 Nvme2n1 : 5.90 105.36 6.58 0.00 0.00 1076966.25 86269.21 1952257.86 00:11:11.933 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0x0 length 0x8000 00:11:11.933 Nvme2n2 : 5.95 118.31 7.39 0.00 0.00 948486.01 38606.66 1098145.05 00:11:11.933 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0x8000 length 0x8000 00:11:11.933 Nvme2n2 : 5.97 111.35 6.96 0.00 0.00 996402.54 47424.23 1982761.89 00:11:11.933 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0x0 length 0x8000 00:11:11.933 Nvme2n3 : 5.96 123.60 7.72 0.00 0.00 887183.25 4587.52 1128649.08 00:11:11.933 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0x8000 length 0x8000 00:11:11.933 Nvme2n3 : 6.00 119.90 7.49 0.00 0.00 903348.42 21328.99 2013265.92 00:11:11.933 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0x0 length 0x2000 00:11:11.933 Nvme3n1 : 5.97 129.20 8.08 0.00 0.00 824732.17 5064.15 1151527.10 00:11:11.933 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:11.933 Verification LBA range: start 0x2000 length 0x2000 00:11:11.933 Nvme3n1 : 6.02 135.87 8.49 0.00 0.00 776757.84 2427.81 2043769.95 00:11:11.933 =================================================================================================================== 00:11:11.933 Total : 1576.40 98.52 0.00 0.00 
1010744.34 2427.81 2043769.95 00:11:13.305 00:11:13.305 real 0m9.374s 00:11:13.305 user 0m17.140s 00:11:13.305 sys 0m0.378s 00:11:13.305 18:18:25 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:13.305 18:18:25 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.305 ************************************ 00:11:13.305 END TEST bdev_verify_big_io 00:11:13.305 ************************************ 00:11:13.305 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:11:13.305 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:13.305 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:13.305 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.305 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:13.305 ************************************ 00:11:13.305 START TEST bdev_write_zeroes 00:11:13.305 ************************************ 00:11:13.305 18:18:25 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:13.565 [2024-07-22 18:18:25.367377] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:13.565 [2024-07-22 18:18:25.367578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68798 ] 00:11:13.565 [2024-07-22 18:18:25.533106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.824 [2024-07-22 18:18:25.783700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.757 Running I/O for 1 seconds... 
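[annotation] Two result tables have gone by at this point, and their per-job rows share a fixed shape: job name, ':', runtime, IOPS, MiB/s, and so on. A hypothetical post-processing helper, not part of the repo, that pulls per-job IOPS out of a captured run; it assumes the raw tool output, without the console timestamp prefixes this log adds.

    # prints e.g. "Nvme0n1      1255.13 IOPS"; skips Job:/Verification/Total rows
    awk '$2 == ":" && $1 ~ /^Nvme/ { printf "%-10s %10.2f IOPS\n", $1, $4 }' bdevperf.log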
00:11:15.720 00:11:15.720 Latency(us) 00:11:15.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.720 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:15.720 Nvme0n1 : 1.02 7160.18 27.97 0.00 0.00 17788.17 14358.34 25499.46 00:11:15.720 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:15.720 Nvme1n1p1 : 1.02 7147.17 27.92 0.00 0.00 17783.40 14239.19 26810.18 00:11:15.720 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:15.720 Nvme1n1p2 : 1.03 7177.12 28.04 0.00 0.00 17707.59 12153.95 25022.84 00:11:15.720 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:15.720 Nvme2n1 : 1.03 7165.51 27.99 0.00 0.00 17661.29 11677.32 21924.77 00:11:15.720 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:15.720 Nvme2n2 : 1.03 7154.08 27.95 0.00 0.00 17658.52 11558.17 21686.46 00:11:15.720 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:15.720 Nvme2n3 : 1.03 7142.66 27.90 0.00 0.00 17616.94 8162.21 20614.05 00:11:15.720 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:15.720 Nvme3n1 : 1.03 7069.57 27.62 0.00 0.00 17766.14 13524.25 28835.84 00:11:15.720 =================================================================================================================== 00:11:15.720 Total : 50016.30 195.38 0.00 0.00 17711.47 8162.21 28835.84 00:11:17.120 00:11:17.120 real 0m3.531s 00:11:17.120 user 0m3.128s 00:11:17.120 sys 0m0.283s 00:11:17.120 18:18:28 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.120 18:18:28 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:17.120 ************************************ 00:11:17.120 END TEST bdev_write_zeroes 00:11:17.120 ************************************ 00:11:17.120 18:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:11:17.120 18:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:17.120 18:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:17.120 18:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.120 18:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:17.120 ************************************ 00:11:17.120 START TEST bdev_json_nonenclosed 00:11:17.120 ************************************ 00:11:17.120 18:18:28 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:17.121 [2024-07-22 18:18:28.977861] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:17.121 [2024-07-22 18:18:28.978072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68851 ] 00:11:17.379 [2024-07-22 18:18:29.150617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.379 [2024-07-22 18:18:29.379482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.379 [2024-07-22 18:18:29.379642] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:17.379 [2024-07-22 18:18:29.379675] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:17.379 [2024-07-22 18:18:29.379711] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:17.947 00:11:17.947 real 0m0.940s 00:11:17.947 user 0m0.669s 00:11:17.947 sys 0m0.164s 00:11:17.947 18:18:29 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:11:17.947 18:18:29 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.947 18:18:29 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:17.947 ************************************ 00:11:17.947 END TEST bdev_json_nonenclosed 00:11:17.947 ************************************ 00:11:17.947 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:11:17.947 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # true 00:11:17.947 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:17.947 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:17.947 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.947 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:17.947 ************************************ 00:11:17.947 START TEST bdev_json_nonarray 00:11:17.947 ************************************ 00:11:17.947 18:18:29 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:17.947 [2024-07-22 18:18:29.954933] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:17.947 [2024-07-22 18:18:29.955177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68882 ] 00:11:18.206 [2024-07-22 18:18:30.120427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.464 [2024-07-22 18:18:30.363545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.464 [2024-07-22 18:18:30.363698] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:11:18.464 [2024-07-22 18:18:30.363747] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:18.464 [2024-07-22 18:18:30.363773] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:19.031 00:11:19.031 real 0m0.936s 00:11:19.031 user 0m0.685s 00:11:19.031 sys 0m0.144s 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:19.031 ************************************ 00:11:19.031 END TEST bdev_json_nonarray 00:11:19.031 ************************************ 00:11:19.031 18:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:11:19.031 18:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # true 00:11:19.031 18:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:11:19.031 18:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:11:19.031 18:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:19.031 18:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:19.031 18:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.031 18:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:19.031 ************************************ 00:11:19.031 START TEST bdev_gpt_uuid 00:11:19.031 ************************************ 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=68913 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 68913 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 68913 ']' 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:19.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:19.031 18:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:19.031 [2024-07-22 18:18:30.988278] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
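[annotation] While spdk_tgt comes up for the GPT test, note that the two JSON negative tests above share one expect-failure shape: feed bdevperf a deliberately malformed config and demand a non-zero exit (the traces capture es=234 both times). A minimal sketch of that shape, with the repo paths as in the log:

    cd /home/vagrant/spdk_repo/spdk
    if build/examples/bdevperf --json test/bdev/nonenclosed.json \
            -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo 'ERROR: bdevperf accepted a config that is not enclosed in {}' >&2
        exit 1
    else
        es=$?    # 234 in the run above; any non-zero status counts as a pass
    fi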
00:11:19.031 [2024-07-22 18:18:30.988487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68913 ] 00:11:19.289 [2024-07-22 18:18:31.161866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.558 [2024-07-22 18:18:31.402096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.493 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:20.493 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:11:20.493 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:20.494 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.494 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:20.753 Some configs were skipped because the RPC state that can call them passed over. 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:11:20.753 { 00:11:20.753 "name": "Nvme1n1p1", 00:11:20.753 "aliases": [ 00:11:20.753 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:20.753 ], 00:11:20.753 "product_name": "GPT Disk", 00:11:20.753 "block_size": 4096, 00:11:20.753 "num_blocks": 655104, 00:11:20.753 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:20.753 "assigned_rate_limits": { 00:11:20.753 "rw_ios_per_sec": 0, 00:11:20.753 "rw_mbytes_per_sec": 0, 00:11:20.753 "r_mbytes_per_sec": 0, 00:11:20.753 "w_mbytes_per_sec": 0 00:11:20.753 }, 00:11:20.753 "claimed": false, 00:11:20.753 "zoned": false, 00:11:20.753 "supported_io_types": { 00:11:20.753 "read": true, 00:11:20.753 "write": true, 00:11:20.753 "unmap": true, 00:11:20.753 "flush": true, 00:11:20.753 "reset": true, 00:11:20.753 "nvme_admin": false, 00:11:20.753 "nvme_io": false, 00:11:20.753 "nvme_io_md": false, 00:11:20.753 "write_zeroes": true, 00:11:20.753 "zcopy": false, 00:11:20.753 "get_zone_info": false, 00:11:20.753 "zone_management": false, 00:11:20.753 "zone_append": false, 00:11:20.753 "compare": true, 00:11:20.753 "compare_and_write": false, 00:11:20.753 "abort": true, 00:11:20.753 "seek_hole": false, 00:11:20.753 "seek_data": false, 00:11:20.753 "copy": true, 00:11:20.753 "nvme_iov_md": false 00:11:20.753 }, 00:11:20.753 "driver_specific": { 
00:11:20.753 "gpt": { 00:11:20.753 "base_bdev": "Nvme1n1", 00:11:20.753 "offset_blocks": 256, 00:11:20.753 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:20.753 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:20.753 "partition_name": "SPDK_TEST_first" 00:11:20.753 } 00:11:20.753 } 00:11:20.753 } 00:11:20.753 ]' 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:20.753 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:11:21.012 { 00:11:21.012 "name": "Nvme1n1p2", 00:11:21.012 "aliases": [ 00:11:21.012 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:21.012 ], 00:11:21.012 "product_name": "GPT Disk", 00:11:21.012 "block_size": 4096, 00:11:21.012 "num_blocks": 655103, 00:11:21.012 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:21.012 "assigned_rate_limits": { 00:11:21.012 "rw_ios_per_sec": 0, 00:11:21.012 "rw_mbytes_per_sec": 0, 00:11:21.012 "r_mbytes_per_sec": 0, 00:11:21.012 "w_mbytes_per_sec": 0 00:11:21.012 }, 00:11:21.012 "claimed": false, 00:11:21.012 "zoned": false, 00:11:21.012 "supported_io_types": { 00:11:21.012 "read": true, 00:11:21.012 "write": true, 00:11:21.012 "unmap": true, 00:11:21.012 "flush": true, 00:11:21.012 "reset": true, 00:11:21.012 "nvme_admin": false, 00:11:21.012 "nvme_io": false, 00:11:21.012 "nvme_io_md": false, 00:11:21.012 "write_zeroes": true, 00:11:21.012 "zcopy": false, 00:11:21.012 "get_zone_info": false, 00:11:21.012 "zone_management": false, 00:11:21.012 "zone_append": false, 00:11:21.012 "compare": true, 00:11:21.012 "compare_and_write": false, 00:11:21.012 "abort": true, 00:11:21.012 "seek_hole": false, 00:11:21.012 "seek_data": false, 00:11:21.012 "copy": true, 00:11:21.012 "nvme_iov_md": false 00:11:21.012 }, 00:11:21.012 "driver_specific": { 00:11:21.012 "gpt": { 00:11:21.012 "base_bdev": "Nvme1n1", 00:11:21.012 "offset_blocks": 655360, 00:11:21.012 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:21.012 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:21.012 "partition_name": "SPDK_TEST_second" 00:11:21.012 } 00:11:21.012 } 00:11:21.012 } 00:11:21.012 ]' 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 68913 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 68913 ']' 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 68913 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68913 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:21.012 killing process with pid 68913 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68913' 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 68913 00:11:21.012 18:18:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 68913 00:11:23.542 00:11:23.542 real 0m4.350s 00:11:23.542 user 0m4.576s 00:11:23.542 sys 0m0.552s 00:11:23.542 18:18:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:23.542 18:18:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:23.542 ************************************ 00:11:23.542 END TEST bdev_gpt_uuid 00:11:23.542 ************************************ 00:11:23.542 18:18:35 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:11:23.542 18:18:35 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:11:23.542 18:18:35 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:11:23.542 18:18:35 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:11:23.542 18:18:35 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:23.542 18:18:35 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:23.542 18:18:35 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:11:23.542 18:18:35 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:23.542 18:18:35 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:23.542 18:18:35 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:23.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:23.801 Waiting for block devices as requested 00:11:24.059 
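[annotation] Before the device rebinds scroll past: the bdev_gpt_uuid assertions above reduce to jq probes over bdev_get_bdevs output. Restated as a standalone sketch, with the GUID taken from the run above and the default spdk_tgt RPC socket assumed:

    cd /home/vagrant/spdk_repo/spdk
    guid=6f89f330-603b-4116-ac73-2ca8eae53030    # SPDK_TEST_first partition above
    bdev=$(scripts/rpc.py bdev_get_bdevs -b "$guid")    # default /var/tmp/spdk.sock assumed

    [[ $(jq -r length <<< "$bdev") == 1 ]]                       # exactly one match
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$guid" ]]      # alias is the GUID
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$guid" ]]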
0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:24.059 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:24.059 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:24.317 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:29.583 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:29.583 18:18:41 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:11:29.583 18:18:41 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:11:29.583 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:29.583 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:29.583 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:29.583 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:29.583 18:18:41 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:29.583 00:11:29.583 real 1m7.175s 00:11:29.583 user 1m25.350s 00:11:29.583 sys 0m10.548s 00:11:29.583 18:18:41 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.583 18:18:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:29.583 ************************************ 00:11:29.583 END TEST blockdev_nvme_gpt 00:11:29.583 ************************************ 00:11:29.583 18:18:41 -- common/autotest_common.sh@1142 -- # return 0 00:11:29.583 18:18:41 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:29.583 18:18:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:29.583 18:18:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.583 18:18:41 -- common/autotest_common.sh@10 -- # set +x 00:11:29.583 ************************************ 00:11:29.583 START TEST nvme 00:11:29.583 ************************************ 00:11:29.583 18:18:41 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:29.583 * Looking for test storage... 00:11:29.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:29.583 18:18:41 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:30.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:30.919 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:30.919 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:30.919 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:30.919 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:30.919 18:18:42 nvme -- nvme/nvme.sh@79 -- # uname 00:11:30.919 18:18:42 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:30.919 18:18:42 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:30.919 18:18:42 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:30.919 18:18:42 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:30.919 18:18:42 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:11:30.919 18:18:42 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:11:30.919 18:18:42 nvme -- common/autotest_common.sh@1069 -- # stubpid=69571 00:11:30.919 18:18:42 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:30.919 Waiting for stub to ready for secondary processes... 
00:11:30.919 18:18:42 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:11:30.919 18:18:42 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:30.919 18:18:42 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69571 ]] 00:11:30.919 18:18:42 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:11:30.919 [2024-07-22 18:18:42.900966] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:30.919 [2024-07-22 18:18:42.901172] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:31.855 18:18:43 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:31.855 18:18:43 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69571 ]] 00:11:31.855 18:18:43 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:11:32.423 [2024-07-22 18:18:44.165969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:32.423 [2024-07-22 18:18:44.422244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.423 [2024-07-22 18:18:44.422376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.423 [2024-07-22 18:18:44.422394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.681 [2024-07-22 18:18:44.441485] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:32.681 [2024-07-22 18:18:44.441535] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:32.681 [2024-07-22 18:18:44.454213] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:32.681 [2024-07-22 18:18:44.454351] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:32.681 [2024-07-22 18:18:44.457208] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:32.681 [2024-07-22 18:18:44.457437] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:32.681 [2024-07-22 18:18:44.457528] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:32.681 [2024-07-22 18:18:44.460582] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:32.681 [2024-07-22 18:18:44.460779] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:32.682 [2024-07-22 18:18:44.460857] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:32.682 [2024-07-22 18:18:44.463585] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:32.682 [2024-07-22 18:18:44.463798] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:32.682 [2024-07-22 18:18:44.463881] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:32.682 [2024-07-22 18:18:44.463937] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:32.682 [2024-07-22 18:18:44.463999] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:32.941 18:18:44 nvme -- common/autotest_common.sh@1071 -- # '[' -e 
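[annotation] The polling traced here comes from _start_stub in autotest_common.sh. Condensed, the pattern is: launch the stub primary process, then block until it drops its ready marker, bailing out if it dies first. Marker path and loop conditions are as traced; the launch line is simplified, since the real helper tracks more state.

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!    # 69571 in the run above
    echo 'Waiting for stub to ready for secondary processes...'
    while [ ! -e /var/run/spdk_stub0 ]; do
        [[ -e /proc/$stubpid ]] || exit 1   # stub exited before signalling ready
        sleep 1s
    done
    echo done.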
/var/run/spdk_stub0 ']' 00:11:32.941 done. 00:11:32.941 18:18:44 nvme -- common/autotest_common.sh@1076 -- # echo done. 00:11:32.941 18:18:44 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:32.941 18:18:44 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:11:32.941 18:18:44 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.941 18:18:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:32.941 ************************************ 00:11:32.941 START TEST nvme_reset 00:11:32.941 ************************************ 00:11:32.941 18:18:44 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:33.200 Initializing NVMe Controllers 00:11:33.200 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:33.200 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:33.200 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:33.200 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:33.200 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:33.200 00:11:33.200 real 0m0.302s 00:11:33.200 user 0m0.109s 00:11:33.200 sys 0m0.153s 00:11:33.200 18:18:45 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.200 18:18:45 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:11:33.200 ************************************ 00:11:33.200 END TEST nvme_reset 00:11:33.200 ************************************ 00:11:33.200 18:18:45 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:33.200 18:18:45 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:33.200 18:18:45 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:33.200 18:18:45 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.200 18:18:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:33.459 ************************************ 00:11:33.459 START TEST nvme_identify 00:11:33.459 ************************************ 00:11:33.459 18:18:45 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:11:33.459 18:18:45 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:11:33.459 18:18:45 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:33.459 18:18:45 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:33.459 18:18:45 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:33.459 18:18:45 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:33.459 18:18:45 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:11:33.459 18:18:45 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:33.459 18:18:45 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:33.459 18:18:45 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:33.459 18:18:45 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:33.459 18:18:45 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:33.459 18:18:45 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:33.721 [2024-07-22 18:18:45.521736] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] 
process 69605 terminated unexpected 00:11:33.721 ===================================================== 00:11:33.721 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:33.721 ===================================================== 00:11:33.721 Controller Capabilities/Features 00:11:33.721 ================================ 00:11:33.721 Vendor ID: 1b36 00:11:33.721 Subsystem Vendor ID: 1af4 00:11:33.721 Serial Number: 12341 00:11:33.721 Model Number: QEMU NVMe Ctrl 00:11:33.721 Firmware Version: 8.0.0 00:11:33.721 Recommended Arb Burst: 6 00:11:33.721 IEEE OUI Identifier: 00 54 52 00:11:33.721 Multi-path I/O 00:11:33.721 May have multiple subsystem ports: No 00:11:33.721 May have multiple controllers: No 00:11:33.721 Associated with SR-IOV VF: No 00:11:33.721 Max Data Transfer Size: 524288 00:11:33.721 Max Number of Namespaces: 256 00:11:33.721 Max Number of I/O Queues: 64 00:11:33.721 NVMe Specification Version (VS): 1.4 00:11:33.721 NVMe Specification Version (Identify): 1.4 00:11:33.721 Maximum Queue Entries: 2048 00:11:33.721 Contiguous Queues Required: Yes 00:11:33.721 Arbitration Mechanisms Supported 00:11:33.721 Weighted Round Robin: Not Supported 00:11:33.721 Vendor Specific: Not Supported 00:11:33.721 Reset Timeout: 7500 ms 00:11:33.721 Doorbell Stride: 4 bytes 00:11:33.721 NVM Subsystem Reset: Not Supported 00:11:33.721 Command Sets Supported 00:11:33.721 NVM Command Set: Supported 00:11:33.721 Boot Partition: Not Supported 00:11:33.721 Memory Page Size Minimum: 4096 bytes 00:11:33.721 Memory Page Size Maximum: 65536 bytes 00:11:33.721 Persistent Memory Region: Not Supported 00:11:33.721 Optional Asynchronous Events Supported 00:11:33.721 Namespace Attribute Notices: Supported 00:11:33.721 Firmware Activation Notices: Not Supported 00:11:33.721 ANA Change Notices: Not Supported 00:11:33.721 PLE Aggregate Log Change Notices: Not Supported 00:11:33.721 LBA Status Info Alert Notices: Not Supported 00:11:33.721 EGE Aggregate Log Change Notices: Not Supported 00:11:33.721 Normal NVM Subsystem Shutdown event: Not Supported 00:11:33.721 Zone Descriptor Change Notices: Not Supported 00:11:33.721 Discovery Log Change Notices: Not Supported 00:11:33.721 Controller Attributes 00:11:33.721 128-bit Host Identifier: Not Supported 00:11:33.721 Non-Operational Permissive Mode: Not Supported 00:11:33.721 NVM Sets: Not Supported 00:11:33.721 Read Recovery Levels: Not Supported 00:11:33.721 Endurance Groups: Not Supported 00:11:33.721 Predictable Latency Mode: Not Supported 00:11:33.721 Traffic Based Keep ALive: Not Supported 00:11:33.721 Namespace Granularity: Not Supported 00:11:33.721 SQ Associations: Not Supported 00:11:33.721 UUID List: Not Supported 00:11:33.721 Multi-Domain Subsystem: Not Supported 00:11:33.721 Fixed Capacity Management: Not Supported 00:11:33.721 Variable Capacity Management: Not Supported 00:11:33.721 Delete Endurance Group: Not Supported 00:11:33.721 Delete NVM Set: Not Supported 00:11:33.721 Extended LBA Formats Supported: Supported 00:11:33.721 Flexible Data Placement Supported: Not Supported 00:11:33.721 00:11:33.721 Controller Memory Buffer Support 00:11:33.721 ================================ 00:11:33.721 Supported: No 00:11:33.721 00:11:33.721 Persistent Memory Region Support 00:11:33.721 ================================ 00:11:33.721 Supported: No 00:11:33.721 00:11:33.721 Admin Command Set Attributes 00:11:33.721 ============================ 00:11:33.721 Security Send/Receive: Not Supported 00:11:33.721 Format NVM: Supported 00:11:33.721 Firmware Activate/Download: Not 
Supported 00:11:33.721 Namespace Management: Supported 00:11:33.721 Device Self-Test: Not Supported 00:11:33.721 Directives: Supported 00:11:33.721 NVMe-MI: Not Supported 00:11:33.721 Virtualization Management: Not Supported 00:11:33.722 Doorbell Buffer Config: Supported 00:11:33.722 Get LBA Status Capability: Not Supported 00:11:33.722 Command & Feature Lockdown Capability: Not Supported 00:11:33.722 Abort Command Limit: 4 00:11:33.722 Async Event Request Limit: 4 00:11:33.722 Number of Firmware Slots: N/A 00:11:33.722 Firmware Slot 1 Read-Only: N/A 00:11:33.722 Firmware Activation Without Reset: N/A 00:11:33.722 Multiple Update Detection Support: N/A 00:11:33.722 Firmware Update Granularity: No Information Provided 00:11:33.722 Per-Namespace SMART Log: Yes 00:11:33.722 Asymmetric Namespace Access Log Page: Not Supported 00:11:33.722 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:33.722 Command Effects Log Page: Supported 00:11:33.722 Get Log Page Extended Data: Supported 00:11:33.722 Telemetry Log Pages: Not Supported 00:11:33.722 Persistent Event Log Pages: Not Supported 00:11:33.722 Supported Log Pages Log Page: May Support 00:11:33.722 Commands Supported & Effects Log Page: Not Supported 00:11:33.722 Feature Identifiers & Effects Log Page:May Support 00:11:33.722 NVMe-MI Commands & Effects Log Page: May Support 00:11:33.722 Data Area 4 for Telemetry Log: Not Supported 00:11:33.722 Error Log Page Entries Supported: 1 00:11:33.722 Keep Alive: Not Supported 00:11:33.722 00:11:33.722 NVM Command Set Attributes 00:11:33.722 ========================== 00:11:33.722 Submission Queue Entry Size 00:11:33.722 Max: 64 00:11:33.722 Min: 64 00:11:33.722 Completion Queue Entry Size 00:11:33.722 Max: 16 00:11:33.722 Min: 16 00:11:33.722 Number of Namespaces: 256 00:11:33.722 Compare Command: Supported 00:11:33.722 Write Uncorrectable Command: Not Supported 00:11:33.722 Dataset Management Command: Supported 00:11:33.722 Write Zeroes Command: Supported 00:11:33.722 Set Features Save Field: Supported 00:11:33.722 Reservations: Not Supported 00:11:33.722 Timestamp: Supported 00:11:33.722 Copy: Supported 00:11:33.722 Volatile Write Cache: Present 00:11:33.722 Atomic Write Unit (Normal): 1 00:11:33.722 Atomic Write Unit (PFail): 1 00:11:33.722 Atomic Compare & Write Unit: 1 00:11:33.722 Fused Compare & Write: Not Supported 00:11:33.722 Scatter-Gather List 00:11:33.722 SGL Command Set: Supported 00:11:33.722 SGL Keyed: Not Supported 00:11:33.722 SGL Bit Bucket Descriptor: Not Supported 00:11:33.722 SGL Metadata Pointer: Not Supported 00:11:33.722 Oversized SGL: Not Supported 00:11:33.722 SGL Metadata Address: Not Supported 00:11:33.722 SGL Offset: Not Supported 00:11:33.722 Transport SGL Data Block: Not Supported 00:11:33.722 Replay Protected Memory Block: Not Supported 00:11:33.722 00:11:33.722 Firmware Slot Information 00:11:33.722 ========================= 00:11:33.722 Active slot: 1 00:11:33.722 Slot 1 Firmware Revision: 1.0 00:11:33.722 00:11:33.722 00:11:33.722 Commands Supported and Effects 00:11:33.722 ============================== 00:11:33.722 Admin Commands 00:11:33.722 -------------- 00:11:33.722 Delete I/O Submission Queue (00h): Supported 00:11:33.722 Create I/O Submission Queue (01h): Supported 00:11:33.722 Get Log Page (02h): Supported 00:11:33.722 Delete I/O Completion Queue (04h): Supported 00:11:33.722 Create I/O Completion Queue (05h): Supported 00:11:33.722 Identify (06h): Supported 00:11:33.722 Abort (08h): Supported 00:11:33.722 Set Features (09h): Supported 00:11:33.722 Get Features 
(0Ah): Supported 00:11:33.722 Asynchronous Event Request (0Ch): Supported 00:11:33.722 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:33.722 Directive Send (19h): Supported 00:11:33.722 Directive Receive (1Ah): Supported 00:11:33.722 Virtualization Management (1Ch): Supported 00:11:33.722 Doorbell Buffer Config (7Ch): Supported 00:11:33.722 Format NVM (80h): Supported LBA-Change 00:11:33.722 I/O Commands 00:11:33.722 ------------ 00:11:33.722 Flush (00h): Supported LBA-Change 00:11:33.722 Write (01h): Supported LBA-Change 00:11:33.722 Read (02h): Supported 00:11:33.722 Compare (05h): Supported 00:11:33.722 Write Zeroes (08h): Supported LBA-Change 00:11:33.722 Dataset Management (09h): Supported LBA-Change 00:11:33.722 Unknown (0Ch): Supported 00:11:33.722 Unknown (12h): Supported 00:11:33.722 Copy (19h): Supported LBA-Change 00:11:33.722 Unknown (1Dh): Supported LBA-Change 00:11:33.722 00:11:33.722 Error Log 00:11:33.722 ========= 00:11:33.722 00:11:33.722 Arbitration 00:11:33.722 =========== 00:11:33.722 Arbitration Burst: no limit 00:11:33.722 00:11:33.722 Power Management 00:11:33.722 ================ 00:11:33.722 Number of Power States: 1 00:11:33.722 Current Power State: Power State #0 00:11:33.722 Power State #0: 00:11:33.722 Max Power: 25.00 W 00:11:33.722 Non-Operational State: Operational 00:11:33.722 Entry Latency: 16 microseconds 00:11:33.722 Exit Latency: 4 microseconds 00:11:33.722 Relative Read Throughput: 0 00:11:33.722 Relative Read Latency: 0 00:11:33.722 Relative Write Throughput: 0 00:11:33.722 Relative Write Latency: 0 [2024-07-22 18:18:45.523185] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 69605 terminated unexpected 00:11:33.722 Idle Power: Not Reported 00:11:33.722 Active Power: Not Reported 00:11:33.722 Non-Operational Permissive Mode: Not Supported 00:11:33.722 00:11:33.722 Health Information 00:11:33.722 ================== 00:11:33.722 Critical Warnings: 00:11:33.722 Available Spare Space: OK 00:11:33.722 Temperature: OK 00:11:33.722 Device Reliability: OK 00:11:33.722 Read Only: No 00:11:33.722 Volatile Memory Backup: OK 00:11:33.722 Current Temperature: 323 Kelvin (50 Celsius) 00:11:33.722 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:33.722 Available Spare: 0% 00:11:33.722 Available Spare Threshold: 0% 00:11:33.722 Life Percentage Used: 0% 00:11:33.722 Data Units Read: 1051 00:11:33.722 Data Units Written: 841 00:11:33.722 Host Read Commands: 47970 00:11:33.722 Host Write Commands: 45092 00:11:33.722 Controller Busy Time: 0 minutes 00:11:33.722 Power Cycles: 0 00:11:33.722 Power On Hours: 0 hours 00:11:33.722 Unsafe Shutdowns: 0 00:11:33.722 Unrecoverable Media Errors: 0 00:11:33.722 Lifetime Error Log Entries: 0 00:11:33.722 Warning Temperature Time: 0 minutes 00:11:33.722 Critical Temperature Time: 0 minutes 00:11:33.722 00:11:33.722 Number of Queues 00:11:33.722 ================ 00:11:33.722 Number of I/O Submission Queues: 64 00:11:33.722 Number of I/O Completion Queues: 64 00:11:33.722 00:11:33.722 ZNS Specific Controller Data 00:11:33.722 ============================ 00:11:33.722 Zone Append Size Limit: 0 00:11:33.722 00:11:33.722 00:11:33.722 Active Namespaces 00:11:33.722 ================= 00:11:33.722 Namespace ID:1 00:11:33.722 Error Recovery Timeout: Unlimited 00:11:33.722 Command Set Identifier: NVM (00h) 00:11:33.722 Deallocate: Supported 00:11:33.722 Deallocated/Unwritten Error: Supported 00:11:33.722 Deallocated Read Value: All 0x00 00:11:33.722 Deallocate in Write
Zeroes: Not Supported 00:11:33.722 Deallocated Guard Field: 0xFFFF 00:11:33.722 Flush: Supported 00:11:33.722 Reservation: Not Supported 00:11:33.722 Namespace Sharing Capabilities: Private 00:11:33.722 Size (in LBAs): 1310720 (5GiB) 00:11:33.722 Capacity (in LBAs): 1310720 (5GiB) 00:11:33.722 Utilization (in LBAs): 1310720 (5GiB) 00:11:33.722 Thin Provisioning: Not Supported 00:11:33.722 Per-NS Atomic Units: No 00:11:33.722 Maximum Single Source Range Length: 128 00:11:33.722 Maximum Copy Length: 128 00:11:33.722 Maximum Source Range Count: 128 00:11:33.722 NGUID/EUI64 Never Reused: No 00:11:33.722 Namespace Write Protected: No 00:11:33.722 Number of LBA Formats: 8 00:11:33.722 Current LBA Format: LBA Format #04 00:11:33.722 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:33.722 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:33.722 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:33.722 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:33.722 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:33.722 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:33.722 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:33.722 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:33.722 00:11:33.722 NVM Specific Namespace Data 00:11:33.722 =========================== 00:11:33.722 Logical Block Storage Tag Mask: 0 00:11:33.722 Protection Information Capabilities: 00:11:33.722 16b Guard Protection Information Storage Tag Support: No 00:11:33.722 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:33.722 Storage Tag Check Read Support: No 00:11:33.722 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.722 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.723 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.723 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.723 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.723 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.723 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.723 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.723 ===================================================== 00:11:33.723 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:33.723 ===================================================== 00:11:33.723 Controller Capabilities/Features 00:11:33.723 ================================ 00:11:33.723 Vendor ID: 1b36 00:11:33.723 Subsystem Vendor ID: 1af4 00:11:33.723 Serial Number: 12343 00:11:33.723 Model Number: QEMU NVMe Ctrl 00:11:33.723 Firmware Version: 8.0.0 00:11:33.723 Recommended Arb Burst: 6 00:11:33.723 IEEE OUI Identifier: 00 54 52 00:11:33.723 Multi-path I/O 00:11:33.723 May have multiple subsystem ports: No 00:11:33.723 May have multiple controllers: Yes 00:11:33.723 Associated with SR-IOV VF: No 00:11:33.723 Max Data Transfer Size: 524288 00:11:33.723 Max Number of Namespaces: 256 00:11:33.723 Max Number of I/O Queues: 64 00:11:33.723 NVMe Specification Version (VS): 1.4 00:11:33.723 NVMe Specification Version (Identify): 1.4 00:11:33.723 Maximum Queue Entries: 2048 00:11:33.723 Contiguous Queues Required: Yes 00:11:33.723 Arbitration Mechanisms Supported 00:11:33.723 Weighted 
Round Robin: Not Supported 00:11:33.723 Vendor Specific: Not Supported 00:11:33.723 Reset Timeout: 7500 ms 00:11:33.723 Doorbell Stride: 4 bytes 00:11:33.723 NVM Subsystem Reset: Not Supported 00:11:33.723 Command Sets Supported 00:11:33.723 NVM Command Set: Supported 00:11:33.723 Boot Partition: Not Supported 00:11:33.723 Memory Page Size Minimum: 4096 bytes 00:11:33.723 Memory Page Size Maximum: 65536 bytes 00:11:33.723 Persistent Memory Region: Not Supported 00:11:33.723 Optional Asynchronous Events Supported 00:11:33.723 Namespace Attribute Notices: Supported 00:11:33.723 Firmware Activation Notices: Not Supported 00:11:33.723 ANA Change Notices: Not Supported 00:11:33.723 PLE Aggregate Log Change Notices: Not Supported 00:11:33.723 LBA Status Info Alert Notices: Not Supported 00:11:33.723 EGE Aggregate Log Change Notices: Not Supported 00:11:33.723 Normal NVM Subsystem Shutdown event: Not Supported 00:11:33.723 Zone Descriptor Change Notices: Not Supported 00:11:33.723 Discovery Log Change Notices: Not Supported 00:11:33.723 Controller Attributes 00:11:33.723 128-bit Host Identifier: Not Supported 00:11:33.723 Non-Operational Permissive Mode: Not Supported 00:11:33.723 NVM Sets: Not Supported 00:11:33.723 Read Recovery Levels: Not Supported 00:11:33.723 Endurance Groups: Supported 00:11:33.723 Predictable Latency Mode: Not Supported 00:11:33.723 Traffic Based Keep ALive: Not Supported 00:11:33.723 Namespace Granularity: Not Supported 00:11:33.723 SQ Associations: Not Supported 00:11:33.723 UUID List: Not Supported 00:11:33.723 Multi-Domain Subsystem: Not Supported 00:11:33.723 Fixed Capacity Management: Not Supported 00:11:33.723 Variable Capacity Management: Not Supported 00:11:33.723 Delete Endurance Group: Not Supported 00:11:33.723 Delete NVM Set: Not Supported 00:11:33.723 Extended LBA Formats Supported: Supported 00:11:33.723 Flexible Data Placement Supported: Supported 00:11:33.723 00:11:33.723 Controller Memory Buffer Support 00:11:33.723 ================================ 00:11:33.723 Supported: No 00:11:33.723 00:11:33.723 Persistent Memory Region Support 00:11:33.723 ================================ 00:11:33.723 Supported: No 00:11:33.723 00:11:33.723 Admin Command Set Attributes 00:11:33.723 ============================ 00:11:33.723 Security Send/Receive: Not Supported 00:11:33.723 Format NVM: Supported 00:11:33.723 Firmware Activate/Download: Not Supported 00:11:33.723 Namespace Management: Supported 00:11:33.723 Device Self-Test: Not Supported 00:11:33.723 Directives: Supported 00:11:33.723 NVMe-MI: Not Supported 00:11:33.723 Virtualization Management: Not Supported 00:11:33.723 Doorbell Buffer Config: Supported 00:11:33.723 Get LBA Status Capability: Not Supported 00:11:33.723 Command & Feature Lockdown Capability: Not Supported 00:11:33.723 Abort Command Limit: 4 00:11:33.723 Async Event Request Limit: 4 00:11:33.723 Number of Firmware Slots: N/A 00:11:33.723 Firmware Slot 1 Read-Only: N/A 00:11:33.723 Firmware Activation Without Reset: N/A 00:11:33.723 Multiple Update Detection Support: N/A 00:11:33.723 Firmware Update Granularity: No Information Provided 00:11:33.723 Per-Namespace SMART Log: Yes 00:11:33.723 Asymmetric Namespace Access Log Page: Not Supported 00:11:33.723 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:33.723 Command Effects Log Page: Supported 00:11:33.723 Get Log Page Extended Data: Supported 00:11:33.723 Telemetry Log Pages: Not Supported 00:11:33.723 Persistent Event Log Pages: Not Supported 00:11:33.723 Supported Log Pages Log Page: May 
Support 00:11:33.723 Commands Supported & Effects Log Page: Not Supported 00:11:33.723 Feature Identifiers & Effects Log Page:May Support 00:11:33.723 NVMe-MI Commands & Effects Log Page: May Support 00:11:33.723 Data Area 4 for Telemetry Log: Not Supported 00:11:33.723 Error Log Page Entries Supported: 1 00:11:33.723 Keep Alive: Not Supported 00:11:33.723 00:11:33.723 NVM Command Set Attributes 00:11:33.723 ========================== 00:11:33.723 Submission Queue Entry Size 00:11:33.723 Max: 64 00:11:33.723 Min: 64 00:11:33.723 Completion Queue Entry Size 00:11:33.723 Max: 16 00:11:33.723 Min: 16 00:11:33.723 Number of Namespaces: 256 00:11:33.723 Compare Command: Supported 00:11:33.723 Write Uncorrectable Command: Not Supported 00:11:33.723 Dataset Management Command: Supported 00:11:33.723 Write Zeroes Command: Supported 00:11:33.723 Set Features Save Field: Supported 00:11:33.723 Reservations: Not Supported 00:11:33.723 Timestamp: Supported 00:11:33.723 Copy: Supported 00:11:33.723 Volatile Write Cache: Present 00:11:33.723 Atomic Write Unit (Normal): 1 00:11:33.723 Atomic Write Unit (PFail): 1 00:11:33.723 Atomic Compare & Write Unit: 1 00:11:33.723 Fused Compare & Write: Not Supported 00:11:33.723 Scatter-Gather List 00:11:33.723 SGL Command Set: Supported 00:11:33.723 SGL Keyed: Not Supported 00:11:33.723 SGL Bit Bucket Descriptor: Not Supported 00:11:33.723 SGL Metadata Pointer: Not Supported 00:11:33.723 Oversized SGL: Not Supported 00:11:33.723 SGL Metadata Address: Not Supported 00:11:33.723 SGL Offset: Not Supported 00:11:33.723 Transport SGL Data Block: Not Supported 00:11:33.723 Replay Protected Memory Block: Not Supported 00:11:33.723 00:11:33.723 Firmware Slot Information 00:11:33.723 ========================= 00:11:33.723 Active slot: 1 00:11:33.723 Slot 1 Firmware Revision: 1.0 00:11:33.723 00:11:33.723 00:11:33.723 Commands Supported and Effects 00:11:33.723 ============================== 00:11:33.723 Admin Commands 00:11:33.723 -------------- 00:11:33.723 Delete I/O Submission Queue (00h): Supported 00:11:33.723 Create I/O Submission Queue (01h): Supported 00:11:33.723 Get Log Page (02h): Supported 00:11:33.723 Delete I/O Completion Queue (04h): Supported 00:11:33.723 Create I/O Completion Queue (05h): Supported 00:11:33.723 Identify (06h): Supported 00:11:33.723 Abort (08h): Supported 00:11:33.723 Set Features (09h): Supported 00:11:33.723 Get Features (0Ah): Supported 00:11:33.723 Asynchronous Event Request (0Ch): Supported 00:11:33.723 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:33.723 Directive Send (19h): Supported 00:11:33.723 Directive Receive (1Ah): Supported 00:11:33.723 Virtualization Management (1Ch): Supported 00:11:33.723 Doorbell Buffer Config (7Ch): Supported 00:11:33.723 Format NVM (80h): Supported LBA-Change 00:11:33.723 I/O Commands 00:11:33.723 ------------ 00:11:33.723 Flush (00h): Supported LBA-Change 00:11:33.723 Write (01h): Supported LBA-Change 00:11:33.723 Read (02h): Supported 00:11:33.723 Compare (05h): Supported 00:11:33.723 Write Zeroes (08h): Supported LBA-Change 00:11:33.723 Dataset Management (09h): Supported LBA-Change 00:11:33.723 Unknown (0Ch): Supported 00:11:33.723 Unknown (12h): Supported 00:11:33.723 Copy (19h): Supported LBA-Change 00:11:33.724 Unknown (1Dh): Supported LBA-Change 00:11:33.724 00:11:33.724 Error Log 00:11:33.724 ========= 00:11:33.724 00:11:33.724 Arbitration 00:11:33.724 =========== 00:11:33.724 Arbitration Burst: no limit 00:11:33.724 00:11:33.724 Power Management 00:11:33.724 ================ 
00:11:33.724 Number of Power States: 1 00:11:33.724 Current Power State: Power State #0 00:11:33.724 Power State #0: 00:11:33.724 Max Power: 25.00 W 00:11:33.724 Non-Operational State: Operational 00:11:33.724 Entry Latency: 16 microseconds 00:11:33.724 Exit Latency: 4 microseconds 00:11:33.724 Relative Read Throughput: 0 00:11:33.724 Relative Read Latency: 0 00:11:33.724 Relative Write Throughput: 0 00:11:33.724 Relative Write Latency: 0 00:11:33.724 Idle Power: Not Reported 00:11:33.724 Active Power: Not Reported 00:11:33.724 Non-Operational Permissive Mode: Not Supported 00:11:33.724 00:11:33.724 Health Information 00:11:33.724 ================== 00:11:33.724 Critical Warnings: 00:11:33.724 Available Spare Space: OK 00:11:33.724 Temperature: OK 00:11:33.724 Device Reliability: OK 00:11:33.724 Read Only: No 00:11:33.724 Volatile Memory Backup: OK 00:11:33.724 Current Temperature: 323 Kelvin (50 Celsius) 00:11:33.724 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:33.724 Available Spare: 0% 00:11:33.724 Available Spare Threshold: 0% 00:11:33.724 Life Percentage Used: 0% 00:11:33.724 Data Units Read: 770 00:11:33.724 Data Units Written: 664 00:11:33.724 Host Read Commands: 33110 00:11:33.724 Host Write Commands: 31700 00:11:33.724 Controller Busy Time: 0 minutes 00:11:33.724 Power Cycles: 0 00:11:33.724 Power On Hours: 0 hours 00:11:33.724 Unsafe Shutdowns: 0 00:11:33.724 Unrecoverable Media Errors: 0 00:11:33.724 Lifetime Error Log Entries: 0 00:11:33.724 Warning Temperature Time: 0 minutes 00:11:33.724 Critical Temperature Time: 0 minutes 00:11:33.724 00:11:33.724 Number of Queues 00:11:33.724 ================ 00:11:33.724 Number of I/O Submission Queues: 64 00:11:33.724 Number of I/O Completion Queues: 64 00:11:33.724 00:11:33.724 ZNS Specific Controller Data 00:11:33.724 ============================ 00:11:33.724 Zone Append Size Limit: 0 00:11:33.724 00:11:33.724 00:11:33.724 Active Namespaces 00:11:33.724 ================= 00:11:33.724 Namespace ID:1 00:11:33.724 Error Recovery Timeout: Unlimited 00:11:33.724 Command Set Identifier: NVM (00h) 00:11:33.724 Deallocate: Supported 00:11:33.724 Deallocated/Unwritten Error: Supported 00:11:33.724 Deallocated Read Value: All 0x00 00:11:33.724 Deallocate in Write Zeroes: Not Supported 00:11:33.724 Deallocated Guard Field: 0xFFFF 00:11:33.724 Flush: Supported 00:11:33.724 Reservation: Not Supported 00:11:33.724 Namespace Sharing Capabilities: Multiple Controllers 00:11:33.724 Size (in LBAs): 262144 (1GiB) 00:11:33.724 Capacity (in LBAs): 262144 (1GiB) 00:11:33.724 Utilization (in LBAs): 262144 (1GiB) 00:11:33.724 Thin Provisioning: Not Supported 00:11:33.724 Per-NS Atomic Units: No 00:11:33.724 Maximum Single Source Range Length: 128 00:11:33.724 Maximum Copy Length: 128 00:11:33.724 Maximum Source Range Count: 128 00:11:33.724 NGUID/EUI64 Never Reused: No 00:11:33.724 Namespace Write Protected: No 00:11:33.724 Endurance group ID: 1 00:11:33.724 Number of LBA Formats: 8 00:11:33.724 Current LBA Format: LBA Format #04 00:11:33.724 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:33.724 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:33.724 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:33.724 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:33.724 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:33.724 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:33.724 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:33.724 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:33.724 00:11:33.724 Get 
Feature FDP: 00:11:33.724 ================ 00:11:33.724 Enabled: Yes 00:11:33.724 FDP configuration index: 0 00:11:33.724 00:11:33.724 FDP configurations log page 00:11:33.724 =========================== 00:11:33.724 Number of FDP configurations: 1 00:11:33.724 Version: 0 00:11:33.724 Size: 112 00:11:33.724 FDP Configuration Descriptor: 0 00:11:33.724 Descriptor Size: 96 00:11:33.724 Reclaim Group Identifier format: 2 00:11:33.724 FDP Volatile Write Cache: Not Present 00:11:33.724 FDP Configuration: Valid 00:11:33.724 Vendor Specific Size: 0 00:11:33.724 Number of Reclaim Groups: 2 00:11:33.724 Number of Reclaim Unit Handles: 8 00:11:33.724 Max Placement Identifiers: 128 00:11:33.724 Number of Namespaces Supported: 256 00:11:33.724 Reclaim unit Nominal Size: 6000000 bytes 00:11:33.724 Estimated Reclaim Unit Time Limit: Not Reported 00:11:33.724 RUH Desc #000: RUH Type: Initially Isolated 00:11:33.724 RUH Desc #001: RUH Type: Initially Isolated 00:11:33.724 RUH Desc #002: RUH Type: Initially Isolated 00:11:33.724 RUH Desc #003: RUH Type: Initially Isolated 00:11:33.724 RUH Desc #004: RUH Type: Initially Isolated 00:11:33.724 RUH Desc #005: RUH Type: Initially Isolated 00:11:33.724 RUH Desc #006: RUH Type: Initially Isolated 00:11:33.724 RUH Desc #007: RUH Type: Initially Isolated 00:11:33.724 00:11:33.724 FDP reclaim unit handle usage log page 00:11:33.724 ====================================== 00:11:33.724 Number of Reclaim Unit Handles: 8 00:11:33.724 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:33.724 RUH Usage Desc #001: RUH Attributes: Unused 00:11:33.724 RUH Usage Desc #002: RUH Attributes: Unused 00:11:33.724 RUH Usage Desc #003: RUH Attributes: Unused 00:11:33.724 RUH Usage Desc #004: RUH Attributes: Unused 00:11:33.724 [2024-07-22 18:18:45.525399] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 69605 terminated unexpected 00:11:33.724 RUH Usage Desc #005: RUH Attributes: Unused 00:11:33.724 RUH Usage Desc #006: RUH Attributes: Unused 00:11:33.724 RUH Usage Desc #007: RUH Attributes: Unused 00:11:33.724 00:11:33.724 FDP statistics log page 00:11:33.724 ======================= 00:11:33.724 Host bytes with metadata written: 419012608 00:11:33.724 Media bytes with metadata written: 419057664 00:11:33.724 Media bytes erased: 0 00:11:33.724 00:11:33.724 FDP events log page 00:11:33.724 =================== 00:11:33.724 Number of FDP events: 0 00:11:33.724 00:11:33.724 NVM Specific Namespace Data 00:11:33.724 =========================== 00:11:33.724 Logical Block Storage Tag Mask: 0 00:11:33.724 Protection Information Capabilities: 00:11:33.724 16b Guard Protection Information Storage Tag Support: No 00:11:33.724 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:33.724 Storage Tag Check Read Support: No 00:11:33.724 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.724 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.724 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.724 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.724 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.724 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.724 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format:
16b Guard PI 00:11:33.724 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.724 ===================================================== 00:11:33.724 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:33.724 ===================================================== 00:11:33.724 Controller Capabilities/Features 00:11:33.724 ================================ 00:11:33.724 Vendor ID: 1b36 00:11:33.724 Subsystem Vendor ID: 1af4 00:11:33.724 Serial Number: 12340 00:11:33.724 Model Number: QEMU NVMe Ctrl 00:11:33.724 Firmware Version: 8.0.0 00:11:33.724 Recommended Arb Burst: 6 00:11:33.724 IEEE OUI Identifier: 00 54 52 00:11:33.724 Multi-path I/O 00:11:33.724 May have multiple subsystem ports: No 00:11:33.724 May have multiple controllers: No 00:11:33.724 Associated with SR-IOV VF: No 00:11:33.724 Max Data Transfer Size: 524288 00:11:33.724 Max Number of Namespaces: 256 00:11:33.724 Max Number of I/O Queues: 64 00:11:33.724 NVMe Specification Version (VS): 1.4 00:11:33.724 NVMe Specification Version (Identify): 1.4 00:11:33.724 Maximum Queue Entries: 2048 00:11:33.724 Contiguous Queues Required: Yes 00:11:33.724 Arbitration Mechanisms Supported 00:11:33.724 Weighted Round Robin: Not Supported 00:11:33.724 Vendor Specific: Not Supported 00:11:33.724 Reset Timeout: 7500 ms 00:11:33.724 Doorbell Stride: 4 bytes 00:11:33.724 NVM Subsystem Reset: Not Supported 00:11:33.724 Command Sets Supported 00:11:33.724 NVM Command Set: Supported 00:11:33.724 Boot Partition: Not Supported 00:11:33.724 Memory Page Size Minimum: 4096 bytes 00:11:33.725 Memory Page Size Maximum: 65536 bytes 00:11:33.725 Persistent Memory Region: Not Supported 00:11:33.725 Optional Asynchronous Events Supported 00:11:33.725 Namespace Attribute Notices: Supported 00:11:33.725 Firmware Activation Notices: Not Supported 00:11:33.725 ANA Change Notices: Not Supported 00:11:33.725 PLE Aggregate Log Change Notices: Not Supported 00:11:33.725 LBA Status Info Alert Notices: Not Supported 00:11:33.725 EGE Aggregate Log Change Notices: Not Supported 00:11:33.725 Normal NVM Subsystem Shutdown event: Not Supported 00:11:33.725 Zone Descriptor Change Notices: Not Supported 00:11:33.725 Discovery Log Change Notices: Not Supported 00:11:33.725 Controller Attributes 00:11:33.725 128-bit Host Identifier: Not Supported 00:11:33.725 Non-Operational Permissive Mode: Not Supported 00:11:33.725 NVM Sets: Not Supported 00:11:33.725 Read Recovery Levels: Not Supported 00:11:33.725 Endurance Groups: Not Supported 00:11:33.725 Predictable Latency Mode: Not Supported 00:11:33.725 Traffic Based Keep ALive: Not Supported 00:11:33.725 Namespace Granularity: Not Supported 00:11:33.725 SQ Associations: Not Supported 00:11:33.725 UUID List: Not Supported 00:11:33.725 Multi-Domain Subsystem: Not Supported 00:11:33.725 Fixed Capacity Management: Not Supported 00:11:33.725 Variable Capacity Management: Not Supported 00:11:33.725 Delete Endurance Group: Not Supported 00:11:33.725 Delete NVM Set: Not Supported 00:11:33.725 Extended LBA Formats Supported: Supported 00:11:33.725 Flexible Data Placement Supported: Not Supported 00:11:33.725 00:11:33.725 Controller Memory Buffer Support 00:11:33.725 ================================ 00:11:33.725 Supported: No 00:11:33.725 00:11:33.725 Persistent Memory Region Support 00:11:33.725 ================================ 00:11:33.725 Supported: No 00:11:33.725 00:11:33.725 Admin Command Set Attributes 00:11:33.725 ============================ 00:11:33.725 Security Send/Receive: Not Supported 
00:11:33.725 Format NVM: Supported 00:11:33.725 Firmware Activate/Download: Not Supported 00:11:33.725 Namespace Management: Supported 00:11:33.725 Device Self-Test: Not Supported 00:11:33.725 Directives: Supported 00:11:33.725 NVMe-MI: Not Supported 00:11:33.725 Virtualization Management: Not Supported 00:11:33.725 Doorbell Buffer Config: Supported 00:11:33.725 Get LBA Status Capability: Not Supported 00:11:33.725 Command & Feature Lockdown Capability: Not Supported 00:11:33.725 Abort Command Limit: 4 00:11:33.725 Async Event Request Limit: 4 00:11:33.725 Number of Firmware Slots: N/A 00:11:33.725 Firmware Slot 1 Read-Only: N/A 00:11:33.725 Firmware Activation Without Reset: N/A 00:11:33.725 Multiple Update Detection Support: N/A 00:11:33.725 Firmware Update Granularity: No Information Provided 00:11:33.725 Per-Namespace SMART Log: Yes 00:11:33.725 Asymmetric Namespace Access Log Page: Not Supported 00:11:33.725 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:33.725 Command Effects Log Page: Supported 00:11:33.725 Get Log Page Extended Data: Supported 00:11:33.725 Telemetry Log Pages: Not Supported 00:11:33.725 Persistent Event Log Pages: Not Supported 00:11:33.725 Supported Log Pages Log Page: May Support 00:11:33.725 Commands Supported & Effects Log Page: Not Supported 00:11:33.725 Feature Identifiers & Effects Log Page:May Support 00:11:33.725 NVMe-MI Commands & Effects Log Page: May Support 00:11:33.725 Data Area 4 for Telemetry Log: Not Supported 00:11:33.725 Error Log Page Entries Supported: 1 00:11:33.725 Keep Alive: Not Supported 00:11:33.725 00:11:33.725 NVM Command Set Attributes 00:11:33.725 ========================== 00:11:33.725 Submission Queue Entry Size 00:11:33.725 Max: 64 00:11:33.725 Min: 64 00:11:33.725 Completion Queue Entry Size 00:11:33.725 Max: 16 00:11:33.725 Min: 16 00:11:33.725 Number of Namespaces: 256 00:11:33.725 Compare Command: Supported 00:11:33.725 Write Uncorrectable Command: Not Supported 00:11:33.725 Dataset Management Command: Supported 00:11:33.725 Write Zeroes Command: Supported 00:11:33.725 Set Features Save Field: Supported 00:11:33.725 Reservations: Not Supported 00:11:33.725 Timestamp: Supported 00:11:33.725 Copy: Supported 00:11:33.725 Volatile Write Cache: Present 00:11:33.725 Atomic Write Unit (Normal): 1 00:11:33.725 Atomic Write Unit (PFail): 1 00:11:33.725 Atomic Compare & Write Unit: 1 00:11:33.725 Fused Compare & Write: Not Supported 00:11:33.725 Scatter-Gather List 00:11:33.725 SGL Command Set: Supported 00:11:33.725 SGL Keyed: Not Supported 00:11:33.725 SGL Bit Bucket Descriptor: Not Supported 00:11:33.725 SGL Metadata Pointer: Not Supported 00:11:33.725 Oversized SGL: Not Supported 00:11:33.725 SGL Metadata Address: Not Supported 00:11:33.725 SGL Offset: Not Supported 00:11:33.725 Transport SGL Data Block: Not Supported 00:11:33.725 Replay Protected Memory Block: Not Supported 00:11:33.725 00:11:33.725 Firmware Slot Information 00:11:33.725 ========================= 00:11:33.725 Active slot: 1 00:11:33.725 Slot 1 Firmware Revision: 1.0 00:11:33.725 00:11:33.725 00:11:33.725 Commands Supported and Effects 00:11:33.725 ============================== 00:11:33.725 Admin Commands 00:11:33.725 -------------- 00:11:33.725 Delete I/O Submission Queue (00h): Supported 00:11:33.725 Create I/O Submission Queue (01h): Supported 00:11:33.725 Get Log Page (02h): Supported 00:11:33.725 Delete I/O Completion Queue (04h): Supported 00:11:33.725 Create I/O Completion Queue (05h): Supported 00:11:33.725 Identify (06h): Supported 00:11:33.725 Abort 
(08h): Supported 00:11:33.725 Set Features (09h): Supported 00:11:33.725 Get Features (0Ah): Supported 00:11:33.725 Asynchronous Event Request (0Ch): Supported 00:11:33.725 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:33.725 Directive Send (19h): Supported 00:11:33.725 Directive Receive (1Ah): Supported 00:11:33.725 Virtualization Management (1Ch): Supported 00:11:33.725 Doorbell Buffer Config (7Ch): Supported 00:11:33.725 Format NVM (80h): Supported LBA-Change 00:11:33.725 I/O Commands 00:11:33.725 ------------ 00:11:33.725 Flush (00h): Supported LBA-Change 00:11:33.725 Write (01h): Supported LBA-Change 00:11:33.725 Read (02h): Supported 00:11:33.725 Compare (05h): Supported 00:11:33.725 Write Zeroes (08h): Supported LBA-Change 00:11:33.725 Dataset Management (09h): Supported LBA-Change 00:11:33.725 Unknown (0Ch): Supported 00:11:33.725 Unknown (12h): Supported 00:11:33.725 Copy (19h): Supported LBA-Change 00:11:33.725 Unknown (1Dh): Supported LBA-Change 00:11:33.725 00:11:33.725 Error Log 00:11:33.725 ========= 00:11:33.725 00:11:33.725 Arbitration 00:11:33.725 =========== 00:11:33.725 Arbitration Burst: no limit 00:11:33.725 00:11:33.725 Power Management 00:11:33.725 ================ 00:11:33.725 Number of Power States: 1 00:11:33.725 Current Power State: Power State #0 00:11:33.725 Power State #0: 00:11:33.725 Max Power: 25.00 W 00:11:33.725 Non-Operational State: Operational 00:11:33.725 Entry Latency: 16 microseconds 00:11:33.725 Exit Latency: 4 microseconds 00:11:33.725 Relative Read Throughput: 0 00:11:33.725 Relative Read Latency: 0 00:11:33.725 Relative Write Throughput: 0 00:11:33.725 Relative Write Latency: 0 00:11:33.725 Idle Power: Not Reported 00:11:33.725 Active Power: Not Reported 00:11:33.725 Non-Operational Permissive Mode: Not Supported 00:11:33.725 00:11:33.725 Health Information 00:11:33.725 ================== 00:11:33.725 Critical Warnings: 00:11:33.725 Available Spare Space: OK 00:11:33.725 Temperature: OK 00:11:33.725 Device Reliability: OK 00:11:33.725 Read Only: No 00:11:33.725 Volatile Memory Backup: OK 00:11:33.725 Current Temperature: 323 Kelvin (50 Celsius) 00:11:33.725 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:33.725 Available Spare: 0% 00:11:33.725 Available Spare Threshold: 0% 00:11:33.725 Life Percentage Used: 0% 00:11:33.725 Data Units Read: 677 00:11:33.725 Data Units Written: 568 00:11:33.725 Host Read Commands: 31990 00:11:33.725 Host Write Commands: 31028 00:11:33.725 Controller Busy Time: 0 minutes 00:11:33.725 Power Cycles: 0 00:11:33.725 Power On Hours: 0 hours 00:11:33.725 Unsafe Shutdowns: 0 00:11:33.725 Unrecoverable Media Errors: 0 00:11:33.725 Lifetime Error Log Entries: 0 00:11:33.725 Warning Temperature Time: 0 minutes 00:11:33.725 Critical Temperature Time: 0 minutes 00:11:33.725 00:11:33.725 Number of Queues 00:11:33.725 ================ 00:11:33.725 Number of I/O Submission Queues: 64 00:11:33.725 Number of I/O Completion Queues: 64 00:11:33.725 00:11:33.725 ZNS Specific Controller Data 00:11:33.725 ============================ 00:11:33.725 Zone Append Size Limit: 0 00:11:33.725 00:11:33.725 00:11:33.725 Active Namespaces 00:11:33.726 ================= 00:11:33.726 Namespace ID:1 00:11:33.726 Error Recovery Timeout: Unlimited 00:11:33.726 Command Set Identifier: NVM (00h) 00:11:33.726 Deallocate: Supported 00:11:33.726 Deallocated/Unwritten Error: Supported 00:11:33.726 Deallocated Read Value: All 0x00 00:11:33.726 Deallocate in Write Zeroes: Not Supported 00:11:33.726 Deallocated Guard Field: 0xFFFF 
00:11:33.726 Flush: Supported 00:11:33.726 Reservation: Not Supported 00:11:33.726 Metadata Transferred as: Separate Metadata Buffer 00:11:33.726 Namespace Sharing Capabilities: Private 00:11:33.726 Size (in LBAs): 1548666 (5GiB) 00:11:33.726 Capacity (in LBAs): 1548666 (5GiB) 00:11:33.726 Utilization (in LBAs): 1548666 (5GiB) 00:11:33.726 Thin Provisioning: Not Supported 00:11:33.726 Per-NS Atomic Units: No 00:11:33.726 Maximum Single Source Range Length: 128 00:11:33.726 Maximum Copy Length: 128 00:11:33.726 Maximum Source Range Count: 128 00:11:33.726 NGUID/EUI64 Never Reused: No 00:11:33.726 Namespace Write Protected: No 00:11:33.726 Number of LBA Formats: 8 00:11:33.726 Current LBA Format: LBA Format #07 00:11:33.726 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:33.726 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:33.726 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:33.726 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:33.726 LBA Format #04: Data Size: 4096 Metadata Size: 0 [2024-07-22 18:18:45.526431] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 69605 terminated unexpected 00:11:33.726 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:33.726 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:33.726 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:33.726 00:11:33.726 NVM Specific Namespace Data 00:11:33.726 =========================== 00:11:33.726 Logical Block Storage Tag Mask: 0 00:11:33.726 Protection Information Capabilities: 00:11:33.726 16b Guard Protection Information Storage Tag Support: No 00:11:33.726 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:33.726 Storage Tag Check Read Support: No 00:11:33.726 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.726 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.726 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.726 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.726 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.726 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.726 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.726 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.726 ===================================================== 00:11:33.726 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:33.726 ===================================================== 00:11:33.726 Controller Capabilities/Features 00:11:33.726 ================================ 00:11:33.726 Vendor ID: 1b36 00:11:33.726 Subsystem Vendor ID: 1af4 00:11:33.726 Serial Number: 12342 00:11:33.726 Model Number: QEMU NVMe Ctrl 00:11:33.726 Firmware Version: 8.0.0 00:11:33.726 Recommended Arb Burst: 6 00:11:33.726 IEEE OUI Identifier: 00 54 52 00:11:33.726 Multi-path I/O 00:11:33.726 May have multiple subsystem ports: No 00:11:33.726 May have multiple controllers: No 00:11:33.726 Associated with SR-IOV VF: No 00:11:33.726 Max Data Transfer Size: 524288 00:11:33.726 Max Number of Namespaces: 256 00:11:33.726 Max Number of I/O Queues: 64 00:11:33.726 NVMe Specification Version (VS): 1.4 00:11:33.726 NVMe Specification Version (Identify): 1.4
00:11:33.726 Maximum Queue Entries: 2048 00:11:33.726 Contiguous Queues Required: Yes 00:11:33.726 Arbitration Mechanisms Supported 00:11:33.726 Weighted Round Robin: Not Supported 00:11:33.726 Vendor Specific: Not Supported 00:11:33.726 Reset Timeout: 7500 ms 00:11:33.726 Doorbell Stride: 4 bytes 00:11:33.726 NVM Subsystem Reset: Not Supported 00:11:33.726 Command Sets Supported 00:11:33.726 NVM Command Set: Supported 00:11:33.726 Boot Partition: Not Supported 00:11:33.726 Memory Page Size Minimum: 4096 bytes 00:11:33.726 Memory Page Size Maximum: 65536 bytes 00:11:33.726 Persistent Memory Region: Not Supported 00:11:33.726 Optional Asynchronous Events Supported 00:11:33.726 Namespace Attribute Notices: Supported 00:11:33.726 Firmware Activation Notices: Not Supported 00:11:33.726 ANA Change Notices: Not Supported 00:11:33.726 PLE Aggregate Log Change Notices: Not Supported 00:11:33.726 LBA Status Info Alert Notices: Not Supported 00:11:33.726 EGE Aggregate Log Change Notices: Not Supported 00:11:33.726 Normal NVM Subsystem Shutdown event: Not Supported 00:11:33.726 Zone Descriptor Change Notices: Not Supported 00:11:33.726 Discovery Log Change Notices: Not Supported 00:11:33.726 Controller Attributes 00:11:33.726 128-bit Host Identifier: Not Supported 00:11:33.726 Non-Operational Permissive Mode: Not Supported 00:11:33.726 NVM Sets: Not Supported 00:11:33.726 Read Recovery Levels: Not Supported 00:11:33.726 Endurance Groups: Not Supported 00:11:33.726 Predictable Latency Mode: Not Supported 00:11:33.726 Traffic Based Keep ALive: Not Supported 00:11:33.726 Namespace Granularity: Not Supported 00:11:33.726 SQ Associations: Not Supported 00:11:33.726 UUID List: Not Supported 00:11:33.726 Multi-Domain Subsystem: Not Supported 00:11:33.726 Fixed Capacity Management: Not Supported 00:11:33.726 Variable Capacity Management: Not Supported 00:11:33.726 Delete Endurance Group: Not Supported 00:11:33.726 Delete NVM Set: Not Supported 00:11:33.726 Extended LBA Formats Supported: Supported 00:11:33.726 Flexible Data Placement Supported: Not Supported 00:11:33.726 00:11:33.726 Controller Memory Buffer Support 00:11:33.726 ================================ 00:11:33.726 Supported: No 00:11:33.726 00:11:33.726 Persistent Memory Region Support 00:11:33.726 ================================ 00:11:33.726 Supported: No 00:11:33.726 00:11:33.726 Admin Command Set Attributes 00:11:33.726 ============================ 00:11:33.726 Security Send/Receive: Not Supported 00:11:33.726 Format NVM: Supported 00:11:33.726 Firmware Activate/Download: Not Supported 00:11:33.726 Namespace Management: Supported 00:11:33.726 Device Self-Test: Not Supported 00:11:33.726 Directives: Supported 00:11:33.726 NVMe-MI: Not Supported 00:11:33.726 Virtualization Management: Not Supported 00:11:33.726 Doorbell Buffer Config: Supported 00:11:33.726 Get LBA Status Capability: Not Supported 00:11:33.726 Command & Feature Lockdown Capability: Not Supported 00:11:33.726 Abort Command Limit: 4 00:11:33.726 Async Event Request Limit: 4 00:11:33.726 Number of Firmware Slots: N/A 00:11:33.727 Firmware Slot 1 Read-Only: N/A 00:11:33.727 Firmware Activation Without Reset: N/A 00:11:33.727 Multiple Update Detection Support: N/A 00:11:33.727 Firmware Update Granularity: No Information Provided 00:11:33.727 Per-Namespace SMART Log: Yes 00:11:33.727 Asymmetric Namespace Access Log Page: Not Supported 00:11:33.727 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:33.727 Command Effects Log Page: Supported 00:11:33.727 Get Log Page Extended Data: 
Supported 00:11:33.727 Telemetry Log Pages: Not Supported 00:11:33.727 Persistent Event Log Pages: Not Supported 00:11:33.727 Supported Log Pages Log Page: May Support 00:11:33.727 Commands Supported & Effects Log Page: Not Supported 00:11:33.727 Feature Identifiers & Effects Log Page:May Support 00:11:33.727 NVMe-MI Commands & Effects Log Page: May Support 00:11:33.727 Data Area 4 for Telemetry Log: Not Supported 00:11:33.727 Error Log Page Entries Supported: 1 00:11:33.727 Keep Alive: Not Supported 00:11:33.727 00:11:33.727 NVM Command Set Attributes 00:11:33.727 ========================== 00:11:33.727 Submission Queue Entry Size 00:11:33.727 Max: 64 00:11:33.727 Min: 64 00:11:33.727 Completion Queue Entry Size 00:11:33.727 Max: 16 00:11:33.727 Min: 16 00:11:33.727 Number of Namespaces: 256 00:11:33.727 Compare Command: Supported 00:11:33.727 Write Uncorrectable Command: Not Supported 00:11:33.727 Dataset Management Command: Supported 00:11:33.727 Write Zeroes Command: Supported 00:11:33.727 Set Features Save Field: Supported 00:11:33.727 Reservations: Not Supported 00:11:33.727 Timestamp: Supported 00:11:33.727 Copy: Supported 00:11:33.727 Volatile Write Cache: Present 00:11:33.727 Atomic Write Unit (Normal): 1 00:11:33.727 Atomic Write Unit (PFail): 1 00:11:33.727 Atomic Compare & Write Unit: 1 00:11:33.727 Fused Compare & Write: Not Supported 00:11:33.727 Scatter-Gather List 00:11:33.727 SGL Command Set: Supported 00:11:33.727 SGL Keyed: Not Supported 00:11:33.727 SGL Bit Bucket Descriptor: Not Supported 00:11:33.727 SGL Metadata Pointer: Not Supported 00:11:33.727 Oversized SGL: Not Supported 00:11:33.727 SGL Metadata Address: Not Supported 00:11:33.727 SGL Offset: Not Supported 00:11:33.727 Transport SGL Data Block: Not Supported 00:11:33.727 Replay Protected Memory Block: Not Supported 00:11:33.727 00:11:33.727 Firmware Slot Information 00:11:33.727 ========================= 00:11:33.727 Active slot: 1 00:11:33.727 Slot 1 Firmware Revision: 1.0 00:11:33.727 00:11:33.727 00:11:33.727 Commands Supported and Effects 00:11:33.727 ============================== 00:11:33.727 Admin Commands 00:11:33.727 -------------- 00:11:33.727 Delete I/O Submission Queue (00h): Supported 00:11:33.727 Create I/O Submission Queue (01h): Supported 00:11:33.727 Get Log Page (02h): Supported 00:11:33.727 Delete I/O Completion Queue (04h): Supported 00:11:33.727 Create I/O Completion Queue (05h): Supported 00:11:33.727 Identify (06h): Supported 00:11:33.727 Abort (08h): Supported 00:11:33.727 Set Features (09h): Supported 00:11:33.727 Get Features (0Ah): Supported 00:11:33.727 Asynchronous Event Request (0Ch): Supported 00:11:33.727 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:33.727 Directive Send (19h): Supported 00:11:33.727 Directive Receive (1Ah): Supported 00:11:33.727 Virtualization Management (1Ch): Supported 00:11:33.727 Doorbell Buffer Config (7Ch): Supported 00:11:33.727 Format NVM (80h): Supported LBA-Change 00:11:33.727 I/O Commands 00:11:33.727 ------------ 00:11:33.727 Flush (00h): Supported LBA-Change 00:11:33.727 Write (01h): Supported LBA-Change 00:11:33.727 Read (02h): Supported 00:11:33.727 Compare (05h): Supported 00:11:33.727 Write Zeroes (08h): Supported LBA-Change 00:11:33.727 Dataset Management (09h): Supported LBA-Change 00:11:33.727 Unknown (0Ch): Supported 00:11:33.727 Unknown (12h): Supported 00:11:33.727 Copy (19h): Supported LBA-Change 00:11:33.727 Unknown (1Dh): Supported LBA-Change 00:11:33.727 00:11:33.727 Error Log 00:11:33.727 ========= 00:11:33.727 
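The identify dumps are internally consistent and easy to spot-check with shell arithmetic: a namespace's size in LBAs times its current LBA data size reproduces the GiB figure, and the temperatures are plain Kelvin-to-Celsius conversions (the tool evidently applies an integer 273 K offset). For example, using the 12341 namespace reported earlier (1310720 LBAs at LBA format #04, 4096-byte data size) and the temperature fields (a sketch in bash, not part of the captured run):
$ echo $(( 1310720 * 4096 ))   # LBAs x data size
5368709120                     # exactly 5 GiB (5 * 1024^3)
$ echo $(( 323 - 273 )) $(( 343 - 273 ))   # reported Kelvin values as Celsius
50 70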
00:11:33.727 Arbitration 00:11:33.727 =========== 00:11:33.727 Arbitration Burst: no limit 00:11:33.727 00:11:33.727 Power Management 00:11:33.727 ================ 00:11:33.727 Number of Power States: 1 00:11:33.727 Current Power State: Power State #0 00:11:33.727 Power State #0: 00:11:33.727 Max Power: 25.00 W 00:11:33.727 Non-Operational State: Operational 00:11:33.727 Entry Latency: 16 microseconds 00:11:33.727 Exit Latency: 4 microseconds 00:11:33.727 Relative Read Throughput: 0 00:11:33.727 Relative Read Latency: 0 00:11:33.727 Relative Write Throughput: 0 00:11:33.727 Relative Write Latency: 0 00:11:33.727 Idle Power: Not Reported 00:11:33.727 Active Power: Not Reported 00:11:33.727 Non-Operational Permissive Mode: Not Supported 00:11:33.727 00:11:33.727 Health Information 00:11:33.727 ================== 00:11:33.727 Critical Warnings: 00:11:33.727 Available Spare Space: OK 00:11:33.727 Temperature: OK 00:11:33.727 Device Reliability: OK 00:11:33.727 Read Only: No 00:11:33.727 Volatile Memory Backup: OK 00:11:33.727 Current Temperature: 323 Kelvin (50 Celsius) 00:11:33.727 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:33.727 Available Spare: 0% 00:11:33.727 Available Spare Threshold: 0% 00:11:33.727 Life Percentage Used: 0% 00:11:33.727 Data Units Read: 2128 00:11:33.727 Data Units Written: 1809 00:11:33.727 Host Read Commands: 97612 00:11:33.727 Host Write Commands: 93382 00:11:33.727 Controller Busy Time: 0 minutes 00:11:33.727 Power Cycles: 0 00:11:33.727 Power On Hours: 0 hours 00:11:33.727 Unsafe Shutdowns: 0 00:11:33.727 Unrecoverable Media Errors: 0 00:11:33.727 Lifetime Error Log Entries: 0 00:11:33.727 Warning Temperature Time: 0 minutes 00:11:33.727 Critical Temperature Time: 0 minutes 00:11:33.727 00:11:33.727 Number of Queues 00:11:33.727 ================ 00:11:33.727 Number of I/O Submission Queues: 64 00:11:33.727 Number of I/O Completion Queues: 64 00:11:33.727 00:11:33.727 ZNS Specific Controller Data 00:11:33.727 ============================ 00:11:33.727 Zone Append Size Limit: 0 00:11:33.727 00:11:33.727 00:11:33.727 Active Namespaces 00:11:33.727 ================= 00:11:33.727 Namespace ID:1 00:11:33.727 Error Recovery Timeout: Unlimited 00:11:33.727 Command Set Identifier: NVM (00h) 00:11:33.727 Deallocate: Supported 00:11:33.727 Deallocated/Unwritten Error: Supported 00:11:33.727 Deallocated Read Value: All 0x00 00:11:33.727 Deallocate in Write Zeroes: Not Supported 00:11:33.727 Deallocated Guard Field: 0xFFFF 00:11:33.727 Flush: Supported 00:11:33.727 Reservation: Not Supported 00:11:33.727 Namespace Sharing Capabilities: Private 00:11:33.727 Size (in LBAs): 1048576 (4GiB) 00:11:33.727 Capacity (in LBAs): 1048576 (4GiB) 00:11:33.727 Utilization (in LBAs): 1048576 (4GiB) 00:11:33.727 Thin Provisioning: Not Supported 00:11:33.727 Per-NS Atomic Units: No 00:11:33.727 Maximum Single Source Range Length: 128 00:11:33.727 Maximum Copy Length: 128 00:11:33.727 Maximum Source Range Count: 128 00:11:33.727 NGUID/EUI64 Never Reused: No 00:11:33.727 Namespace Write Protected: No 00:11:33.727 Number of LBA Formats: 8 00:11:33.727 Current LBA Format: LBA Format #04 00:11:33.727 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:33.727 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:33.727 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:33.727 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:33.727 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:33.727 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:33.727 LBA Format #06: Data Size: 
4096 Metadata Size: 16 00:11:33.727 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:33.727 00:11:33.727 NVM Specific Namespace Data 00:11:33.727 =========================== 00:11:33.727 Logical Block Storage Tag Mask: 0 00:11:33.727 Protection Information Capabilities: 00:11:33.727 16b Guard Protection Information Storage Tag Support: No 00:11:33.727 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:33.727 Storage Tag Check Read Support: No 00:11:33.727 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.727 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.727 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.727 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.727 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.727 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.727 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.727 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.727 Namespace ID:2 00:11:33.727 Error Recovery Timeout: Unlimited 00:11:33.727 Command Set Identifier: NVM (00h) 00:11:33.728 Deallocate: Supported 00:11:33.728 Deallocated/Unwritten Error: Supported 00:11:33.728 Deallocated Read Value: All 0x00 00:11:33.728 Deallocate in Write Zeroes: Not Supported 00:11:33.728 Deallocated Guard Field: 0xFFFF 00:11:33.728 Flush: Supported 00:11:33.728 Reservation: Not Supported 00:11:33.728 Namespace Sharing Capabilities: Private 00:11:33.728 Size (in LBAs): 1048576 (4GiB) 00:11:33.728 Capacity (in LBAs): 1048576 (4GiB) 00:11:33.728 Utilization (in LBAs): 1048576 (4GiB) 00:11:33.728 Thin Provisioning: Not Supported 00:11:33.728 Per-NS Atomic Units: No 00:11:33.728 Maximum Single Source Range Length: 128 00:11:33.728 Maximum Copy Length: 128 00:11:33.728 Maximum Source Range Count: 128 00:11:33.728 NGUID/EUI64 Never Reused: No 00:11:33.728 Namespace Write Protected: No 00:11:33.728 Number of LBA Formats: 8 00:11:33.728 Current LBA Format: LBA Format #04 00:11:33.728 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:33.728 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:33.728 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:33.728 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:33.728 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:33.728 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:33.728 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:33.728 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:33.728 00:11:33.728 NVM Specific Namespace Data 00:11:33.728 =========================== 00:11:33.728 Logical Block Storage Tag Mask: 0 00:11:33.728 Protection Information Capabilities: 00:11:33.728 16b Guard Protection Information Storage Tag Support: No 00:11:33.728 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:33.728 Storage Tag Check Read Support: No 00:11:33.728 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 
Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Namespace ID:3 00:11:33.728 Error Recovery Timeout: Unlimited 00:11:33.728 Command Set Identifier: NVM (00h) 00:11:33.728 Deallocate: Supported 00:11:33.728 Deallocated/Unwritten Error: Supported 00:11:33.728 Deallocated Read Value: All 0x00 00:11:33.728 Deallocate in Write Zeroes: Not Supported 00:11:33.728 Deallocated Guard Field: 0xFFFF 00:11:33.728 Flush: Supported 00:11:33.728 Reservation: Not Supported 00:11:33.728 Namespace Sharing Capabilities: Private 00:11:33.728 Size (in LBAs): 1048576 (4GiB) 00:11:33.728 Capacity (in LBAs): 1048576 (4GiB) 00:11:33.728 Utilization (in LBAs): 1048576 (4GiB) 00:11:33.728 Thin Provisioning: Not Supported 00:11:33.728 Per-NS Atomic Units: No 00:11:33.728 Maximum Single Source Range Length: 128 00:11:33.728 Maximum Copy Length: 128 00:11:33.728 Maximum Source Range Count: 128 00:11:33.728 NGUID/EUI64 Never Reused: No 00:11:33.728 Namespace Write Protected: No 00:11:33.728 Number of LBA Formats: 8 00:11:33.728 Current LBA Format: LBA Format #04 00:11:33.728 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:33.728 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:33.728 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:33.728 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:33.728 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:33.728 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:33.728 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:33.728 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:33.728 00:11:33.728 NVM Specific Namespace Data 00:11:33.728 =========================== 00:11:33.728 Logical Block Storage Tag Mask: 0 00:11:33.728 Protection Information Capabilities: 00:11:33.728 16b Guard Protection Information Storage Tag Support: No 00:11:33.728 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:33.728 Storage Tag Check Read Support: No 00:11:33.728 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.728 18:18:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:33.728 18:18:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:33.988 
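The xtrace lines above show how the identify pass works: nvme.sh@15 loops over each PCIe BDF and nvme.sh@16 runs spdk_nvme_identify against it. A minimal standalone sketch of that loop, assuming the four controller addresses seen in this log and a build tree at ./build (both are assumptions; the -r and -i flags are exactly those in the trace):
  # Sketch only: BDF list assumed from the controllers attached in this run.
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      ./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
  done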
===================================================== 00:11:33.988 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:33.988 ===================================================== 00:11:33.988 Controller Capabilities/Features 00:11:33.988 ================================ 00:11:33.988 Vendor ID: 1b36 00:11:33.988 Subsystem Vendor ID: 1af4 00:11:33.988 Serial Number: 12340 00:11:33.988 Model Number: QEMU NVMe Ctrl 00:11:33.988 Firmware Version: 8.0.0 00:11:33.988 Recommended Arb Burst: 6 00:11:33.988 IEEE OUI Identifier: 00 54 52 00:11:33.988 Multi-path I/O 00:11:33.988 May have multiple subsystem ports: No 00:11:33.988 May have multiple controllers: No 00:11:33.988 Associated with SR-IOV VF: No 00:11:33.988 Max Data Transfer Size: 524288 00:11:33.988 Max Number of Namespaces: 256 00:11:33.988 Max Number of I/O Queues: 64 00:11:33.988 NVMe Specification Version (VS): 1.4 00:11:33.988 NVMe Specification Version (Identify): 1.4 00:11:33.988 Maximum Queue Entries: 2048 00:11:33.988 Contiguous Queues Required: Yes 00:11:33.988 Arbitration Mechanisms Supported 00:11:33.988 Weighted Round Robin: Not Supported 00:11:33.988 Vendor Specific: Not Supported 00:11:33.988 Reset Timeout: 7500 ms 00:11:33.988 Doorbell Stride: 4 bytes 00:11:33.988 NVM Subsystem Reset: Not Supported 00:11:33.988 Command Sets Supported 00:11:33.988 NVM Command Set: Supported 00:11:33.988 Boot Partition: Not Supported 00:11:33.988 Memory Page Size Minimum: 4096 bytes 00:11:33.988 Memory Page Size Maximum: 65536 bytes 00:11:33.988 Persistent Memory Region: Not Supported 00:11:33.988 Optional Asynchronous Events Supported 00:11:33.988 Namespace Attribute Notices: Supported 00:11:33.988 Firmware Activation Notices: Not Supported 00:11:33.988 ANA Change Notices: Not Supported 00:11:33.988 PLE Aggregate Log Change Notices: Not Supported 00:11:33.988 LBA Status Info Alert Notices: Not Supported 00:11:33.988 EGE Aggregate Log Change Notices: Not Supported 00:11:33.988 Normal NVM Subsystem Shutdown event: Not Supported 00:11:33.988 Zone Descriptor Change Notices: Not Supported 00:11:33.988 Discovery Log Change Notices: Not Supported 00:11:33.988 Controller Attributes 00:11:33.988 128-bit Host Identifier: Not Supported 00:11:33.988 Non-Operational Permissive Mode: Not Supported 00:11:33.988 NVM Sets: Not Supported 00:11:33.988 Read Recovery Levels: Not Supported 00:11:33.988 Endurance Groups: Not Supported 00:11:33.988 Predictable Latency Mode: Not Supported 00:11:33.988 Traffic Based Keep ALive: Not Supported 00:11:33.988 Namespace Granularity: Not Supported 00:11:33.988 SQ Associations: Not Supported 00:11:33.988 UUID List: Not Supported 00:11:33.988 Multi-Domain Subsystem: Not Supported 00:11:33.988 Fixed Capacity Management: Not Supported 00:11:33.988 Variable Capacity Management: Not Supported 00:11:33.988 Delete Endurance Group: Not Supported 00:11:33.988 Delete NVM Set: Not Supported 00:11:33.988 Extended LBA Formats Supported: Supported 00:11:33.988 Flexible Data Placement Supported: Not Supported 00:11:33.988 00:11:33.988 Controller Memory Buffer Support 00:11:33.988 ================================ 00:11:33.988 Supported: No 00:11:33.988 00:11:33.988 Persistent Memory Region Support 00:11:33.988 ================================ 00:11:33.988 Supported: No 00:11:33.988 00:11:33.988 Admin Command Set Attributes 00:11:33.988 ============================ 00:11:33.988 Security Send/Receive: Not Supported 00:11:33.988 Format NVM: Supported 00:11:33.988 Firmware Activate/Download: Not Supported 00:11:33.988 Namespace Management: 
Supported 00:11:33.988 Device Self-Test: Not Supported 00:11:33.988 Directives: Supported 00:11:33.988 NVMe-MI: Not Supported 00:11:33.988 Virtualization Management: Not Supported 00:11:33.988 Doorbell Buffer Config: Supported 00:11:33.988 Get LBA Status Capability: Not Supported 00:11:33.988 Command & Feature Lockdown Capability: Not Supported 00:11:33.988 Abort Command Limit: 4 00:11:33.988 Async Event Request Limit: 4 00:11:33.988 Number of Firmware Slots: N/A 00:11:33.988 Firmware Slot 1 Read-Only: N/A 00:11:33.988 Firmware Activation Without Reset: N/A 00:11:33.988 Multiple Update Detection Support: N/A 00:11:33.988 Firmware Update Granularity: No Information Provided 00:11:33.988 Per-Namespace SMART Log: Yes 00:11:33.988 Asymmetric Namespace Access Log Page: Not Supported 00:11:33.988 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:33.988 Command Effects Log Page: Supported 00:11:33.988 Get Log Page Extended Data: Supported 00:11:33.988 Telemetry Log Pages: Not Supported 00:11:33.988 Persistent Event Log Pages: Not Supported 00:11:33.988 Supported Log Pages Log Page: May Support 00:11:33.988 Commands Supported & Effects Log Page: Not Supported 00:11:33.988 Feature Identifiers & Effects Log Page:May Support 00:11:33.988 NVMe-MI Commands & Effects Log Page: May Support 00:11:33.988 Data Area 4 for Telemetry Log: Not Supported 00:11:33.988 Error Log Page Entries Supported: 1 00:11:33.988 Keep Alive: Not Supported 00:11:33.988 00:11:33.988 NVM Command Set Attributes 00:11:33.988 ========================== 00:11:33.988 Submission Queue Entry Size 00:11:33.988 Max: 64 00:11:33.988 Min: 64 00:11:33.988 Completion Queue Entry Size 00:11:33.988 Max: 16 00:11:33.988 Min: 16 00:11:33.988 Number of Namespaces: 256 00:11:33.988 Compare Command: Supported 00:11:33.988 Write Uncorrectable Command: Not Supported 00:11:33.988 Dataset Management Command: Supported 00:11:33.988 Write Zeroes Command: Supported 00:11:33.988 Set Features Save Field: Supported 00:11:33.988 Reservations: Not Supported 00:11:33.988 Timestamp: Supported 00:11:33.988 Copy: Supported 00:11:33.988 Volatile Write Cache: Present 00:11:33.988 Atomic Write Unit (Normal): 1 00:11:33.988 Atomic Write Unit (PFail): 1 00:11:33.988 Atomic Compare & Write Unit: 1 00:11:33.988 Fused Compare & Write: Not Supported 00:11:33.988 Scatter-Gather List 00:11:33.988 SGL Command Set: Supported 00:11:33.988 SGL Keyed: Not Supported 00:11:33.988 SGL Bit Bucket Descriptor: Not Supported 00:11:33.988 SGL Metadata Pointer: Not Supported 00:11:33.988 Oversized SGL: Not Supported 00:11:33.988 SGL Metadata Address: Not Supported 00:11:33.988 SGL Offset: Not Supported 00:11:33.988 Transport SGL Data Block: Not Supported 00:11:33.988 Replay Protected Memory Block: Not Supported 00:11:33.988 00:11:33.988 Firmware Slot Information 00:11:33.988 ========================= 00:11:33.988 Active slot: 1 00:11:33.988 Slot 1 Firmware Revision: 1.0 00:11:33.988 00:11:33.988 00:11:33.988 Commands Supported and Effects 00:11:33.988 ============================== 00:11:33.988 Admin Commands 00:11:33.988 -------------- 00:11:33.988 Delete I/O Submission Queue (00h): Supported 00:11:33.989 Create I/O Submission Queue (01h): Supported 00:11:33.989 Get Log Page (02h): Supported 00:11:33.989 Delete I/O Completion Queue (04h): Supported 00:11:33.989 Create I/O Completion Queue (05h): Supported 00:11:33.989 Identify (06h): Supported 00:11:33.989 Abort (08h): Supported 00:11:33.989 Set Features (09h): Supported 00:11:33.989 Get Features (0Ah): Supported 00:11:33.989 Asynchronous 
Event Request (0Ch): Supported 00:11:33.989 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:33.989 Directive Send (19h): Supported 00:11:33.989 Directive Receive (1Ah): Supported 00:11:33.989 Virtualization Management (1Ch): Supported 00:11:33.989 Doorbell Buffer Config (7Ch): Supported 00:11:33.989 Format NVM (80h): Supported LBA-Change 00:11:33.989 I/O Commands 00:11:33.989 ------------ 00:11:33.989 Flush (00h): Supported LBA-Change 00:11:33.989 Write (01h): Supported LBA-Change 00:11:33.989 Read (02h): Supported 00:11:33.989 Compare (05h): Supported 00:11:33.989 Write Zeroes (08h): Supported LBA-Change 00:11:33.989 Dataset Management (09h): Supported LBA-Change 00:11:33.989 Unknown (0Ch): Supported 00:11:33.989 Unknown (12h): Supported 00:11:33.989 Copy (19h): Supported LBA-Change 00:11:33.989 Unknown (1Dh): Supported LBA-Change 00:11:33.989 00:11:33.989 Error Log 00:11:33.989 ========= 00:11:33.989 00:11:33.989 Arbitration 00:11:33.989 =========== 00:11:33.989 Arbitration Burst: no limit 00:11:33.989 00:11:33.989 Power Management 00:11:33.989 ================ 00:11:33.989 Number of Power States: 1 00:11:33.989 Current Power State: Power State #0 00:11:33.989 Power State #0: 00:11:33.989 Max Power: 25.00 W 00:11:33.989 Non-Operational State: Operational 00:11:33.989 Entry Latency: 16 microseconds 00:11:33.989 Exit Latency: 4 microseconds 00:11:33.989 Relative Read Throughput: 0 00:11:33.989 Relative Read Latency: 0 00:11:33.989 Relative Write Throughput: 0 00:11:33.989 Relative Write Latency: 0 00:11:33.989 Idle Power: Not Reported 00:11:33.989 Active Power: Not Reported 00:11:33.989 Non-Operational Permissive Mode: Not Supported 00:11:33.989 00:11:33.989 Health Information 00:11:33.989 ================== 00:11:33.989 Critical Warnings: 00:11:33.989 Available Spare Space: OK 00:11:33.989 Temperature: OK 00:11:33.989 Device Reliability: OK 00:11:33.989 Read Only: No 00:11:33.989 Volatile Memory Backup: OK 00:11:33.989 Current Temperature: 323 Kelvin (50 Celsius) 00:11:33.989 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:33.989 Available Spare: 0% 00:11:33.989 Available Spare Threshold: 0% 00:11:33.989 Life Percentage Used: 0% 00:11:33.989 Data Units Read: 677 00:11:33.989 Data Units Written: 568 00:11:33.989 Host Read Commands: 31990 00:11:33.989 Host Write Commands: 31028 00:11:33.989 Controller Busy Time: 0 minutes 00:11:33.989 Power Cycles: 0 00:11:33.989 Power On Hours: 0 hours 00:11:33.989 Unsafe Shutdowns: 0 00:11:33.989 Unrecoverable Media Errors: 0 00:11:33.989 Lifetime Error Log Entries: 0 00:11:33.989 Warning Temperature Time: 0 minutes 00:11:33.989 Critical Temperature Time: 0 minutes 00:11:33.989 00:11:33.989 Number of Queues 00:11:33.989 ================ 00:11:33.989 Number of I/O Submission Queues: 64 00:11:33.989 Number of I/O Completion Queues: 64 00:11:33.989 00:11:33.989 ZNS Specific Controller Data 00:11:33.989 ============================ 00:11:33.989 Zone Append Size Limit: 0 00:11:33.989 00:11:33.989 00:11:33.989 Active Namespaces 00:11:33.989 ================= 00:11:33.989 Namespace ID:1 00:11:33.989 Error Recovery Timeout: Unlimited 00:11:33.989 Command Set Identifier: NVM (00h) 00:11:33.989 Deallocate: Supported 00:11:33.989 Deallocated/Unwritten Error: Supported 00:11:33.989 Deallocated Read Value: All 0x00 00:11:33.989 Deallocate in Write Zeroes: Not Supported 00:11:33.989 Deallocated Guard Field: 0xFFFF 00:11:33.989 Flush: Supported 00:11:33.989 Reservation: Not Supported 00:11:33.989 Metadata Transferred as: Separate Metadata Buffer 
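Each report above contains a "Commands Supported and Effects" table split into Admin Commands and I/O Commands, immediately followed by the Error Log heading. If only that table is of interest, a sed range over the standalone tool output can isolate it; a sketch, assuming the same controller address as in the trace and that the headings start at column 0 when the tool is run outside the harness:
  # Sketch: print only the command-effects table from one identify report.
  ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 \
      | sed -n '/^Commands Supported and Effects/,/^Error Log/p'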
00:11:33.989 Namespace Sharing Capabilities: Private 00:11:33.989 Size (in LBAs): 1548666 (5GiB) 00:11:33.989 Capacity (in LBAs): 1548666 (5GiB) 00:11:33.989 Utilization (in LBAs): 1548666 (5GiB) 00:11:33.989 Thin Provisioning: Not Supported 00:11:33.989 Per-NS Atomic Units: No 00:11:33.989 Maximum Single Source Range Length: 128 00:11:33.989 Maximum Copy Length: 128 00:11:33.989 Maximum Source Range Count: 128 00:11:33.989 NGUID/EUI64 Never Reused: No 00:11:33.989 Namespace Write Protected: No 00:11:33.989 Number of LBA Formats: 8 00:11:33.989 Current LBA Format: LBA Format #07 00:11:33.989 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:33.989 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:33.989 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:33.989 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:33.989 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:33.989 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:33.989 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:33.989 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:33.989 00:11:33.989 NVM Specific Namespace Data 00:11:33.989 =========================== 00:11:33.989 Logical Block Storage Tag Mask: 0 00:11:33.989 Protection Information Capabilities: 00:11:33.989 16b Guard Protection Information Storage Tag Support: No 00:11:33.989 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:33.989 Storage Tag Check Read Support: No 00:11:33.989 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.989 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.989 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.989 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.989 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.989 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.989 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.989 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:33.989 18:18:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:33.989 18:18:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:34.249 ===================================================== 00:11:34.249 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:34.249 ===================================================== 00:11:34.249 Controller Capabilities/Features 00:11:34.249 ================================ 00:11:34.249 Vendor ID: 1b36 00:11:34.249 Subsystem Vendor ID: 1af4 00:11:34.249 Serial Number: 12341 00:11:34.249 Model Number: QEMU NVMe Ctrl 00:11:34.249 Firmware Version: 8.0.0 00:11:34.249 Recommended Arb Burst: 6 00:11:34.249 IEEE OUI Identifier: 00 54 52 00:11:34.249 Multi-path I/O 00:11:34.249 May have multiple subsystem ports: No 00:11:34.249 May have multiple controllers: No 00:11:34.249 Associated with SR-IOV VF: No 00:11:34.249 Max Data Transfer Size: 524288 00:11:34.249 Max Number of Namespaces: 256 00:11:34.249 Max Number of I/O Queues: 64 00:11:34.249 NVMe Specification Version (VS): 1.4 00:11:34.249 NVMe Specification Version (Identify): 1.4 00:11:34.249 Maximum Queue Entries: 
2048 00:11:34.249 Contiguous Queues Required: Yes 00:11:34.249 Arbitration Mechanisms Supported 00:11:34.249 Weighted Round Robin: Not Supported 00:11:34.249 Vendor Specific: Not Supported 00:11:34.249 Reset Timeout: 7500 ms 00:11:34.249 Doorbell Stride: 4 bytes 00:11:34.249 NVM Subsystem Reset: Not Supported 00:11:34.249 Command Sets Supported 00:11:34.249 NVM Command Set: Supported 00:11:34.249 Boot Partition: Not Supported 00:11:34.249 Memory Page Size Minimum: 4096 bytes 00:11:34.249 Memory Page Size Maximum: 65536 bytes 00:11:34.249 Persistent Memory Region: Not Supported 00:11:34.249 Optional Asynchronous Events Supported 00:11:34.249 Namespace Attribute Notices: Supported 00:11:34.249 Firmware Activation Notices: Not Supported 00:11:34.249 ANA Change Notices: Not Supported 00:11:34.249 PLE Aggregate Log Change Notices: Not Supported 00:11:34.249 LBA Status Info Alert Notices: Not Supported 00:11:34.249 EGE Aggregate Log Change Notices: Not Supported 00:11:34.249 Normal NVM Subsystem Shutdown event: Not Supported 00:11:34.249 Zone Descriptor Change Notices: Not Supported 00:11:34.249 Discovery Log Change Notices: Not Supported 00:11:34.249 Controller Attributes 00:11:34.249 128-bit Host Identifier: Not Supported 00:11:34.249 Non-Operational Permissive Mode: Not Supported 00:11:34.249 NVM Sets: Not Supported 00:11:34.249 Read Recovery Levels: Not Supported 00:11:34.249 Endurance Groups: Not Supported 00:11:34.249 Predictable Latency Mode: Not Supported 00:11:34.249 Traffic Based Keep ALive: Not Supported 00:11:34.249 Namespace Granularity: Not Supported 00:11:34.249 SQ Associations: Not Supported 00:11:34.249 UUID List: Not Supported 00:11:34.249 Multi-Domain Subsystem: Not Supported 00:11:34.249 Fixed Capacity Management: Not Supported 00:11:34.249 Variable Capacity Management: Not Supported 00:11:34.249 Delete Endurance Group: Not Supported 00:11:34.249 Delete NVM Set: Not Supported 00:11:34.249 Extended LBA Formats Supported: Supported 00:11:34.249 Flexible Data Placement Supported: Not Supported 00:11:34.249 00:11:34.249 Controller Memory Buffer Support 00:11:34.249 ================================ 00:11:34.249 Supported: No 00:11:34.249 00:11:34.249 Persistent Memory Region Support 00:11:34.249 ================================ 00:11:34.249 Supported: No 00:11:34.249 00:11:34.249 Admin Command Set Attributes 00:11:34.249 ============================ 00:11:34.249 Security Send/Receive: Not Supported 00:11:34.249 Format NVM: Supported 00:11:34.249 Firmware Activate/Download: Not Supported 00:11:34.249 Namespace Management: Supported 00:11:34.249 Device Self-Test: Not Supported 00:11:34.249 Directives: Supported 00:11:34.249 NVMe-MI: Not Supported 00:11:34.249 Virtualization Management: Not Supported 00:11:34.249 Doorbell Buffer Config: Supported 00:11:34.249 Get LBA Status Capability: Not Supported 00:11:34.249 Command & Feature Lockdown Capability: Not Supported 00:11:34.249 Abort Command Limit: 4 00:11:34.249 Async Event Request Limit: 4 00:11:34.249 Number of Firmware Slots: N/A 00:11:34.249 Firmware Slot 1 Read-Only: N/A 00:11:34.249 Firmware Activation Without Reset: N/A 00:11:34.249 Multiple Update Detection Support: N/A 00:11:34.249 Firmware Update Granularity: No Information Provided 00:11:34.249 Per-Namespace SMART Log: Yes 00:11:34.249 Asymmetric Namespace Access Log Page: Not Supported 00:11:34.249 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:34.249 Command Effects Log Page: Supported 00:11:34.249 Get Log Page Extended Data: Supported 00:11:34.249 Telemetry Log Pages: 
Not Supported 00:11:34.249 Persistent Event Log Pages: Not Supported 00:11:34.249 Supported Log Pages Log Page: May Support 00:11:34.249 Commands Supported & Effects Log Page: Not Supported 00:11:34.249 Feature Identifiers & Effects Log Page:May Support 00:11:34.249 NVMe-MI Commands & Effects Log Page: May Support 00:11:34.249 Data Area 4 for Telemetry Log: Not Supported 00:11:34.249 Error Log Page Entries Supported: 1 00:11:34.249 Keep Alive: Not Supported 00:11:34.249 00:11:34.249 NVM Command Set Attributes 00:11:34.249 ========================== 00:11:34.249 Submission Queue Entry Size 00:11:34.249 Max: 64 00:11:34.249 Min: 64 00:11:34.249 Completion Queue Entry Size 00:11:34.249 Max: 16 00:11:34.249 Min: 16 00:11:34.249 Number of Namespaces: 256 00:11:34.249 Compare Command: Supported 00:11:34.249 Write Uncorrectable Command: Not Supported 00:11:34.249 Dataset Management Command: Supported 00:11:34.249 Write Zeroes Command: Supported 00:11:34.249 Set Features Save Field: Supported 00:11:34.249 Reservations: Not Supported 00:11:34.249 Timestamp: Supported 00:11:34.249 Copy: Supported 00:11:34.249 Volatile Write Cache: Present 00:11:34.249 Atomic Write Unit (Normal): 1 00:11:34.249 Atomic Write Unit (PFail): 1 00:11:34.249 Atomic Compare & Write Unit: 1 00:11:34.249 Fused Compare & Write: Not Supported 00:11:34.249 Scatter-Gather List 00:11:34.249 SGL Command Set: Supported 00:11:34.249 SGL Keyed: Not Supported 00:11:34.249 SGL Bit Bucket Descriptor: Not Supported 00:11:34.249 SGL Metadata Pointer: Not Supported 00:11:34.249 Oversized SGL: Not Supported 00:11:34.249 SGL Metadata Address: Not Supported 00:11:34.249 SGL Offset: Not Supported 00:11:34.249 Transport SGL Data Block: Not Supported 00:11:34.249 Replay Protected Memory Block: Not Supported 00:11:34.249 00:11:34.249 Firmware Slot Information 00:11:34.249 ========================= 00:11:34.249 Active slot: 1 00:11:34.249 Slot 1 Firmware Revision: 1.0 00:11:34.249 00:11:34.249 00:11:34.249 Commands Supported and Effects 00:11:34.249 ============================== 00:11:34.249 Admin Commands 00:11:34.249 -------------- 00:11:34.249 Delete I/O Submission Queue (00h): Supported 00:11:34.249 Create I/O Submission Queue (01h): Supported 00:11:34.249 Get Log Page (02h): Supported 00:11:34.249 Delete I/O Completion Queue (04h): Supported 00:11:34.249 Create I/O Completion Queue (05h): Supported 00:11:34.250 Identify (06h): Supported 00:11:34.250 Abort (08h): Supported 00:11:34.250 Set Features (09h): Supported 00:11:34.250 Get Features (0Ah): Supported 00:11:34.250 Asynchronous Event Request (0Ch): Supported 00:11:34.250 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:34.250 Directive Send (19h): Supported 00:11:34.250 Directive Receive (1Ah): Supported 00:11:34.250 Virtualization Management (1Ch): Supported 00:11:34.250 Doorbell Buffer Config (7Ch): Supported 00:11:34.250 Format NVM (80h): Supported LBA-Change 00:11:34.250 I/O Commands 00:11:34.250 ------------ 00:11:34.250 Flush (00h): Supported LBA-Change 00:11:34.250 Write (01h): Supported LBA-Change 00:11:34.250 Read (02h): Supported 00:11:34.250 Compare (05h): Supported 00:11:34.250 Write Zeroes (08h): Supported LBA-Change 00:11:34.250 Dataset Management (09h): Supported LBA-Change 00:11:34.250 Unknown (0Ch): Supported 00:11:34.250 Unknown (12h): Supported 00:11:34.250 Copy (19h): Supported LBA-Change 00:11:34.250 Unknown (1Dh): Supported LBA-Change 00:11:34.250 00:11:34.250 Error Log 00:11:34.250 ========= 00:11:34.250 00:11:34.250 Arbitration 00:11:34.250 
=========== 00:11:34.250 Arbitration Burst: no limit 00:11:34.250 00:11:34.250 Power Management 00:11:34.250 ================ 00:11:34.250 Number of Power States: 1 00:11:34.250 Current Power State: Power State #0 00:11:34.250 Power State #0: 00:11:34.250 Max Power: 25.00 W 00:11:34.250 Non-Operational State: Operational 00:11:34.250 Entry Latency: 16 microseconds 00:11:34.250 Exit Latency: 4 microseconds 00:11:34.250 Relative Read Throughput: 0 00:11:34.250 Relative Read Latency: 0 00:11:34.250 Relative Write Throughput: 0 00:11:34.250 Relative Write Latency: 0 00:11:34.250 Idle Power: Not Reported 00:11:34.250 Active Power: Not Reported 00:11:34.250 Non-Operational Permissive Mode: Not Supported 00:11:34.250 00:11:34.250 Health Information 00:11:34.250 ================== 00:11:34.250 Critical Warnings: 00:11:34.250 Available Spare Space: OK 00:11:34.250 Temperature: OK 00:11:34.250 Device Reliability: OK 00:11:34.250 Read Only: No 00:11:34.250 Volatile Memory Backup: OK 00:11:34.250 Current Temperature: 323 Kelvin (50 Celsius) 00:11:34.250 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:34.250 Available Spare: 0% 00:11:34.250 Available Spare Threshold: 0% 00:11:34.250 Life Percentage Used: 0% 00:11:34.250 Data Units Read: 1051 00:11:34.250 Data Units Written: 841 00:11:34.250 Host Read Commands: 47970 00:11:34.250 Host Write Commands: 45092 00:11:34.250 Controller Busy Time: 0 minutes 00:11:34.250 Power Cycles: 0 00:11:34.250 Power On Hours: 0 hours 00:11:34.250 Unsafe Shutdowns: 0 00:11:34.250 Unrecoverable Media Errors: 0 00:11:34.250 Lifetime Error Log Entries: 0 00:11:34.250 Warning Temperature Time: 0 minutes 00:11:34.250 Critical Temperature Time: 0 minutes 00:11:34.250 00:11:34.250 Number of Queues 00:11:34.250 ================ 00:11:34.250 Number of I/O Submission Queues: 64 00:11:34.250 Number of I/O Completion Queues: 64 00:11:34.250 00:11:34.250 ZNS Specific Controller Data 00:11:34.250 ============================ 00:11:34.250 Zone Append Size Limit: 0 00:11:34.250 00:11:34.250 00:11:34.250 Active Namespaces 00:11:34.250 ================= 00:11:34.250 Namespace ID:1 00:11:34.250 Error Recovery Timeout: Unlimited 00:11:34.250 Command Set Identifier: NVM (00h) 00:11:34.250 Deallocate: Supported 00:11:34.250 Deallocated/Unwritten Error: Supported 00:11:34.250 Deallocated Read Value: All 0x00 00:11:34.250 Deallocate in Write Zeroes: Not Supported 00:11:34.250 Deallocated Guard Field: 0xFFFF 00:11:34.250 Flush: Supported 00:11:34.250 Reservation: Not Supported 00:11:34.250 Namespace Sharing Capabilities: Private 00:11:34.250 Size (in LBAs): 1310720 (5GiB) 00:11:34.250 Capacity (in LBAs): 1310720 (5GiB) 00:11:34.250 Utilization (in LBAs): 1310720 (5GiB) 00:11:34.250 Thin Provisioning: Not Supported 00:11:34.250 Per-NS Atomic Units: No 00:11:34.250 Maximum Single Source Range Length: 128 00:11:34.250 Maximum Copy Length: 128 00:11:34.250 Maximum Source Range Count: 128 00:11:34.250 NGUID/EUI64 Never Reused: No 00:11:34.250 Namespace Write Protected: No 00:11:34.250 Number of LBA Formats: 8 00:11:34.250 Current LBA Format: LBA Format #04 00:11:34.250 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:34.250 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:34.250 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:34.250 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:34.250 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:34.250 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:34.250 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:34.250 LBA 
Format #07: Data Size: 4096 Metadata Size: 64 00:11:34.250 00:11:34.250 NVM Specific Namespace Data 00:11:34.250 =========================== 00:11:34.250 Logical Block Storage Tag Mask: 0 00:11:34.250 Protection Information Capabilities: 00:11:34.250 16b Guard Protection Information Storage Tag Support: No 00:11:34.250 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:34.250 Storage Tag Check Read Support: No 00:11:34.250 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.250 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.250 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.250 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.250 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.250 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.250 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.250 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.250 18:18:46 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:34.250 18:18:46 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:34.509 ===================================================== 00:11:34.509 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:34.509 ===================================================== 00:11:34.509 Controller Capabilities/Features 00:11:34.509 ================================ 00:11:34.509 Vendor ID: 1b36 00:11:34.509 Subsystem Vendor ID: 1af4 00:11:34.509 Serial Number: 12342 00:11:34.509 Model Number: QEMU NVMe Ctrl 00:11:34.509 Firmware Version: 8.0.0 00:11:34.509 Recommended Arb Burst: 6 00:11:34.509 IEEE OUI Identifier: 00 54 52 00:11:34.509 Multi-path I/O 00:11:34.509 May have multiple subsystem ports: No 00:11:34.509 May have multiple controllers: No 00:11:34.509 Associated with SR-IOV VF: No 00:11:34.509 Max Data Transfer Size: 524288 00:11:34.509 Max Number of Namespaces: 256 00:11:34.509 Max Number of I/O Queues: 64 00:11:34.509 NVMe Specification Version (VS): 1.4 00:11:34.509 NVMe Specification Version (Identify): 1.4 00:11:34.509 Maximum Queue Entries: 2048 00:11:34.509 Contiguous Queues Required: Yes 00:11:34.509 Arbitration Mechanisms Supported 00:11:34.509 Weighted Round Robin: Not Supported 00:11:34.509 Vendor Specific: Not Supported 00:11:34.509 Reset Timeout: 7500 ms 00:11:34.509 Doorbell Stride: 4 bytes 00:11:34.509 NVM Subsystem Reset: Not Supported 00:11:34.509 Command Sets Supported 00:11:34.509 NVM Command Set: Supported 00:11:34.509 Boot Partition: Not Supported 00:11:34.509 Memory Page Size Minimum: 4096 bytes 00:11:34.509 Memory Page Size Maximum: 65536 bytes 00:11:34.509 Persistent Memory Region: Not Supported 00:11:34.509 Optional Asynchronous Events Supported 00:11:34.509 Namespace Attribute Notices: Supported 00:11:34.509 Firmware Activation Notices: Not Supported 00:11:34.509 ANA Change Notices: Not Supported 00:11:34.509 PLE Aggregate Log Change Notices: Not Supported 00:11:34.509 LBA Status Info Alert Notices: Not Supported 00:11:34.509 EGE Aggregate Log Change Notices: Not Supported 00:11:34.509 Normal NVM Subsystem Shutdown event: Not Supported 
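The reports quote temperatures in Kelvin with a rounded Celsius figure in parentheses, and namespace sizes in LBAs with a floor-rounded GiB figure. Both are simple derivations; a sketch re-checking them for the 12341 namespace just printed (1310720 LBAs at the current LBA Format #04, i.e. 4096-byte blocks, and the reported 323 Kelvin):
  # Sketch: re-derive two figures quoted in the report above.
  echo "$((323 - 273)) Celsius"                 # 323 Kelvin -> 50 Celsius
  echo "$((1310720 * 4096 / 1073741824)) GiB"   # 1310720 LBAs x 4096 B -> 5 GiB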
00:11:34.509 Zone Descriptor Change Notices: Not Supported 00:11:34.509 Discovery Log Change Notices: Not Supported 00:11:34.509 Controller Attributes 00:11:34.509 128-bit Host Identifier: Not Supported 00:11:34.509 Non-Operational Permissive Mode: Not Supported 00:11:34.509 NVM Sets: Not Supported 00:11:34.509 Read Recovery Levels: Not Supported 00:11:34.509 Endurance Groups: Not Supported 00:11:34.509 Predictable Latency Mode: Not Supported 00:11:34.509 Traffic Based Keep ALive: Not Supported 00:11:34.509 Namespace Granularity: Not Supported 00:11:34.509 SQ Associations: Not Supported 00:11:34.509 UUID List: Not Supported 00:11:34.509 Multi-Domain Subsystem: Not Supported 00:11:34.509 Fixed Capacity Management: Not Supported 00:11:34.509 Variable Capacity Management: Not Supported 00:11:34.509 Delete Endurance Group: Not Supported 00:11:34.509 Delete NVM Set: Not Supported 00:11:34.509 Extended LBA Formats Supported: Supported 00:11:34.509 Flexible Data Placement Supported: Not Supported 00:11:34.509 00:11:34.509 Controller Memory Buffer Support 00:11:34.509 ================================ 00:11:34.509 Supported: No 00:11:34.509 00:11:34.509 Persistent Memory Region Support 00:11:34.509 ================================ 00:11:34.509 Supported: No 00:11:34.509 00:11:34.509 Admin Command Set Attributes 00:11:34.509 ============================ 00:11:34.509 Security Send/Receive: Not Supported 00:11:34.509 Format NVM: Supported 00:11:34.509 Firmware Activate/Download: Not Supported 00:11:34.509 Namespace Management: Supported 00:11:34.509 Device Self-Test: Not Supported 00:11:34.509 Directives: Supported 00:11:34.509 NVMe-MI: Not Supported 00:11:34.509 Virtualization Management: Not Supported 00:11:34.510 Doorbell Buffer Config: Supported 00:11:34.510 Get LBA Status Capability: Not Supported 00:11:34.510 Command & Feature Lockdown Capability: Not Supported 00:11:34.510 Abort Command Limit: 4 00:11:34.510 Async Event Request Limit: 4 00:11:34.510 Number of Firmware Slots: N/A 00:11:34.510 Firmware Slot 1 Read-Only: N/A 00:11:34.510 Firmware Activation Without Reset: N/A 00:11:34.510 Multiple Update Detection Support: N/A 00:11:34.769 Firmware Update Granularity: No Information Provided 00:11:34.769 Per-Namespace SMART Log: Yes 00:11:34.769 Asymmetric Namespace Access Log Page: Not Supported 00:11:34.769 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:34.769 Command Effects Log Page: Supported 00:11:34.769 Get Log Page Extended Data: Supported 00:11:34.769 Telemetry Log Pages: Not Supported 00:11:34.769 Persistent Event Log Pages: Not Supported 00:11:34.769 Supported Log Pages Log Page: May Support 00:11:34.769 Commands Supported & Effects Log Page: Not Supported 00:11:34.769 Feature Identifiers & Effects Log Page:May Support 00:11:34.769 NVMe-MI Commands & Effects Log Page: May Support 00:11:34.769 Data Area 4 for Telemetry Log: Not Supported 00:11:34.769 Error Log Page Entries Supported: 1 00:11:34.769 Keep Alive: Not Supported 00:11:34.769 00:11:34.769 NVM Command Set Attributes 00:11:34.769 ========================== 00:11:34.769 Submission Queue Entry Size 00:11:34.769 Max: 64 00:11:34.769 Min: 64 00:11:34.769 Completion Queue Entry Size 00:11:34.769 Max: 16 00:11:34.769 Min: 16 00:11:34.769 Number of Namespaces: 256 00:11:34.769 Compare Command: Supported 00:11:34.769 Write Uncorrectable Command: Not Supported 00:11:34.769 Dataset Management Command: Supported 00:11:34.769 Write Zeroes Command: Supported 00:11:34.769 Set Features Save Field: Supported 00:11:34.769 Reservations: Not 
Supported 00:11:34.769 Timestamp: Supported 00:11:34.769 Copy: Supported 00:11:34.769 Volatile Write Cache: Present 00:11:34.769 Atomic Write Unit (Normal): 1 00:11:34.769 Atomic Write Unit (PFail): 1 00:11:34.769 Atomic Compare & Write Unit: 1 00:11:34.769 Fused Compare & Write: Not Supported 00:11:34.769 Scatter-Gather List 00:11:34.769 SGL Command Set: Supported 00:11:34.769 SGL Keyed: Not Supported 00:11:34.769 SGL Bit Bucket Descriptor: Not Supported 00:11:34.769 SGL Metadata Pointer: Not Supported 00:11:34.769 Oversized SGL: Not Supported 00:11:34.769 SGL Metadata Address: Not Supported 00:11:34.769 SGL Offset: Not Supported 00:11:34.769 Transport SGL Data Block: Not Supported 00:11:34.769 Replay Protected Memory Block: Not Supported 00:11:34.769 00:11:34.769 Firmware Slot Information 00:11:34.769 ========================= 00:11:34.769 Active slot: 1 00:11:34.769 Slot 1 Firmware Revision: 1.0 00:11:34.769 00:11:34.769 00:11:34.769 Commands Supported and Effects 00:11:34.769 ============================== 00:11:34.769 Admin Commands 00:11:34.769 -------------- 00:11:34.769 Delete I/O Submission Queue (00h): Supported 00:11:34.769 Create I/O Submission Queue (01h): Supported 00:11:34.769 Get Log Page (02h): Supported 00:11:34.769 Delete I/O Completion Queue (04h): Supported 00:11:34.769 Create I/O Completion Queue (05h): Supported 00:11:34.769 Identify (06h): Supported 00:11:34.769 Abort (08h): Supported 00:11:34.769 Set Features (09h): Supported 00:11:34.769 Get Features (0Ah): Supported 00:11:34.769 Asynchronous Event Request (0Ch): Supported 00:11:34.769 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:34.769 Directive Send (19h): Supported 00:11:34.769 Directive Receive (1Ah): Supported 00:11:34.769 Virtualization Management (1Ch): Supported 00:11:34.769 Doorbell Buffer Config (7Ch): Supported 00:11:34.769 Format NVM (80h): Supported LBA-Change 00:11:34.769 I/O Commands 00:11:34.769 ------------ 00:11:34.769 Flush (00h): Supported LBA-Change 00:11:34.769 Write (01h): Supported LBA-Change 00:11:34.769 Read (02h): Supported 00:11:34.769 Compare (05h): Supported 00:11:34.769 Write Zeroes (08h): Supported LBA-Change 00:11:34.770 Dataset Management (09h): Supported LBA-Change 00:11:34.770 Unknown (0Ch): Supported 00:11:34.770 Unknown (12h): Supported 00:11:34.770 Copy (19h): Supported LBA-Change 00:11:34.770 Unknown (1Dh): Supported LBA-Change 00:11:34.770 00:11:34.770 Error Log 00:11:34.770 ========= 00:11:34.770 00:11:34.770 Arbitration 00:11:34.770 =========== 00:11:34.770 Arbitration Burst: no limit 00:11:34.770 00:11:34.770 Power Management 00:11:34.770 ================ 00:11:34.770 Number of Power States: 1 00:11:34.770 Current Power State: Power State #0 00:11:34.770 Power State #0: 00:11:34.770 Max Power: 25.00 W 00:11:34.770 Non-Operational State: Operational 00:11:34.770 Entry Latency: 16 microseconds 00:11:34.770 Exit Latency: 4 microseconds 00:11:34.770 Relative Read Throughput: 0 00:11:34.770 Relative Read Latency: 0 00:11:34.770 Relative Write Throughput: 0 00:11:34.770 Relative Write Latency: 0 00:11:34.770 Idle Power: Not Reported 00:11:34.770 Active Power: Not Reported 00:11:34.770 Non-Operational Permissive Mode: Not Supported 00:11:34.770 00:11:34.770 Health Information 00:11:34.770 ================== 00:11:34.770 Critical Warnings: 00:11:34.770 Available Spare Space: OK 00:11:34.770 Temperature: OK 00:11:34.770 Device Reliability: OK 00:11:34.770 Read Only: No 00:11:34.770 Volatile Memory Backup: OK 00:11:34.770 Current Temperature: 323 Kelvin (50 
Celsius) 00:11:34.770 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:34.770 Available Spare: 0% 00:11:34.770 Available Spare Threshold: 0% 00:11:34.770 Life Percentage Used: 0% 00:11:34.770 Data Units Read: 2128 00:11:34.770 Data Units Written: 1809 00:11:34.770 Host Read Commands: 97612 00:11:34.770 Host Write Commands: 93382 00:11:34.770 Controller Busy Time: 0 minutes 00:11:34.770 Power Cycles: 0 00:11:34.770 Power On Hours: 0 hours 00:11:34.770 Unsafe Shutdowns: 0 00:11:34.770 Unrecoverable Media Errors: 0 00:11:34.770 Lifetime Error Log Entries: 0 00:11:34.770 Warning Temperature Time: 0 minutes 00:11:34.770 Critical Temperature Time: 0 minutes 00:11:34.770 00:11:34.770 Number of Queues 00:11:34.770 ================ 00:11:34.770 Number of I/O Submission Queues: 64 00:11:34.770 Number of I/O Completion Queues: 64 00:11:34.770 00:11:34.770 ZNS Specific Controller Data 00:11:34.770 ============================ 00:11:34.770 Zone Append Size Limit: 0 00:11:34.770 00:11:34.770 00:11:34.770 Active Namespaces 00:11:34.770 ================= 00:11:34.770 Namespace ID:1 00:11:34.770 Error Recovery Timeout: Unlimited 00:11:34.770 Command Set Identifier: NVM (00h) 00:11:34.770 Deallocate: Supported 00:11:34.770 Deallocated/Unwritten Error: Supported 00:11:34.770 Deallocated Read Value: All 0x00 00:11:34.770 Deallocate in Write Zeroes: Not Supported 00:11:34.770 Deallocated Guard Field: 0xFFFF 00:11:34.770 Flush: Supported 00:11:34.770 Reservation: Not Supported 00:11:34.770 Namespace Sharing Capabilities: Private 00:11:34.770 Size (in LBAs): 1048576 (4GiB) 00:11:34.770 Capacity (in LBAs): 1048576 (4GiB) 00:11:34.770 Utilization (in LBAs): 1048576 (4GiB) 00:11:34.770 Thin Provisioning: Not Supported 00:11:34.770 Per-NS Atomic Units: No 00:11:34.770 Maximum Single Source Range Length: 128 00:11:34.770 Maximum Copy Length: 128 00:11:34.770 Maximum Source Range Count: 128 00:11:34.770 NGUID/EUI64 Never Reused: No 00:11:34.770 Namespace Write Protected: No 00:11:34.770 Number of LBA Formats: 8 00:11:34.770 Current LBA Format: LBA Format #04 00:11:34.770 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:34.770 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:34.770 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:34.770 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:34.770 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:34.770 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:34.770 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:34.770 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:34.770 00:11:34.770 NVM Specific Namespace Data 00:11:34.770 =========================== 00:11:34.770 Logical Block Storage Tag Mask: 0 00:11:34.770 Protection Information Capabilities: 00:11:34.770 16b Guard Protection Information Storage Tag Support: No 00:11:34.770 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:34.770 Storage Tag Check Read Support: No 00:11:34.770 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.770 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.770 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.770 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.770 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.770 Extended LBA Format #05: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:11:34.770 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.770 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.770 Namespace ID:2 00:11:34.770 Error Recovery Timeout: Unlimited 00:11:34.770 Command Set Identifier: NVM (00h) 00:11:34.770 Deallocate: Supported 00:11:34.770 Deallocated/Unwritten Error: Supported 00:11:34.770 Deallocated Read Value: All 0x00 00:11:34.770 Deallocate in Write Zeroes: Not Supported 00:11:34.770 Deallocated Guard Field: 0xFFFF 00:11:34.770 Flush: Supported 00:11:34.770 Reservation: Not Supported 00:11:34.770 Namespace Sharing Capabilities: Private 00:11:34.770 Size (in LBAs): 1048576 (4GiB) 00:11:34.770 Capacity (in LBAs): 1048576 (4GiB) 00:11:34.770 Utilization (in LBAs): 1048576 (4GiB) 00:11:34.770 Thin Provisioning: Not Supported 00:11:34.770 Per-NS Atomic Units: No 00:11:34.770 Maximum Single Source Range Length: 128 00:11:34.770 Maximum Copy Length: 128 00:11:34.770 Maximum Source Range Count: 128 00:11:34.770 NGUID/EUI64 Never Reused: No 00:11:34.770 Namespace Write Protected: No 00:11:34.770 Number of LBA Formats: 8 00:11:34.770 Current LBA Format: LBA Format #04 00:11:34.770 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:34.770 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:34.770 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:34.770 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:34.770 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:34.770 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:34.770 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:34.770 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:34.770 00:11:34.770 NVM Specific Namespace Data 00:11:34.770 =========================== 00:11:34.770 Logical Block Storage Tag Mask: 0 00:11:34.770 Protection Information Capabilities: 00:11:34.770 16b Guard Protection Information Storage Tag Support: No 00:11:34.770 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:34.770 Storage Tag Check Read Support: No 00:11:34.770 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Namespace ID:3 00:11:34.771 Error Recovery Timeout: Unlimited 00:11:34.771 Command Set Identifier: NVM (00h) 00:11:34.771 Deallocate: Supported 00:11:34.771 Deallocated/Unwritten Error: Supported 00:11:34.771 Deallocated Read Value: All 0x00 00:11:34.771 Deallocate in Write Zeroes: Not Supported 00:11:34.771 Deallocated Guard Field: 0xFFFF 00:11:34.771 Flush: Supported 00:11:34.771 Reservation: Not Supported 00:11:34.771 Namespace Sharing Capabilities: Private 00:11:34.771 Size (in LBAs): 1048576 (4GiB) 00:11:34.771 Capacity (in LBAs): 1048576 
(4GiB) 00:11:34.771 Utilization (in LBAs): 1048576 (4GiB) 00:11:34.771 Thin Provisioning: Not Supported 00:11:34.771 Per-NS Atomic Units: No 00:11:34.771 Maximum Single Source Range Length: 128 00:11:34.771 Maximum Copy Length: 128 00:11:34.771 Maximum Source Range Count: 128 00:11:34.771 NGUID/EUI64 Never Reused: No 00:11:34.771 Namespace Write Protected: No 00:11:34.771 Number of LBA Formats: 8 00:11:34.771 Current LBA Format: LBA Format #04 00:11:34.771 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:34.771 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:34.771 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:34.771 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:34.771 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:34.771 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:34.771 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:34.771 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:34.771 00:11:34.771 NVM Specific Namespace Data 00:11:34.771 =========================== 00:11:34.771 Logical Block Storage Tag Mask: 0 00:11:34.771 Protection Information Capabilities: 00:11:34.771 16b Guard Protection Information Storage Tag Support: No 00:11:34.771 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:34.771 Storage Tag Check Read Support: No 00:11:34.771 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:34.771 18:18:46 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:34.771 18:18:46 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:35.031 ===================================================== 00:11:35.031 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:35.031 ===================================================== 00:11:35.031 Controller Capabilities/Features 00:11:35.031 ================================ 00:11:35.031 Vendor ID: 1b36 00:11:35.031 Subsystem Vendor ID: 1af4 00:11:35.031 Serial Number: 12343 00:11:35.031 Model Number: QEMU NVMe Ctrl 00:11:35.031 Firmware Version: 8.0.0 00:11:35.031 Recommended Arb Burst: 6 00:11:35.031 IEEE OUI Identifier: 00 54 52 00:11:35.031 Multi-path I/O 00:11:35.031 May have multiple subsystem ports: No 00:11:35.031 May have multiple controllers: Yes 00:11:35.031 Associated with SR-IOV VF: No 00:11:35.031 Max Data Transfer Size: 524288 00:11:35.031 Max Number of Namespaces: 256 00:11:35.031 Max Number of I/O Queues: 64 00:11:35.031 NVMe Specification Version (VS): 1.4 00:11:35.031 NVMe Specification Version (Identify): 1.4 00:11:35.031 Maximum Queue Entries: 2048 00:11:35.031 Contiguous Queues Required: Yes 00:11:35.031 Arbitration Mechanisms Supported 00:11:35.031 Weighted Round Robin: Not 
Supported 00:11:35.031 Vendor Specific: Not Supported 00:11:35.031 Reset Timeout: 7500 ms 00:11:35.031 Doorbell Stride: 4 bytes 00:11:35.031 NVM Subsystem Reset: Not Supported 00:11:35.031 Command Sets Supported 00:11:35.031 NVM Command Set: Supported 00:11:35.031 Boot Partition: Not Supported 00:11:35.031 Memory Page Size Minimum: 4096 bytes 00:11:35.031 Memory Page Size Maximum: 65536 bytes 00:11:35.031 Persistent Memory Region: Not Supported 00:11:35.031 Optional Asynchronous Events Supported 00:11:35.031 Namespace Attribute Notices: Supported 00:11:35.031 Firmware Activation Notices: Not Supported 00:11:35.031 ANA Change Notices: Not Supported 00:11:35.031 PLE Aggregate Log Change Notices: Not Supported 00:11:35.031 LBA Status Info Alert Notices: Not Supported 00:11:35.031 EGE Aggregate Log Change Notices: Not Supported 00:11:35.031 Normal NVM Subsystem Shutdown event: Not Supported 00:11:35.031 Zone Descriptor Change Notices: Not Supported 00:11:35.031 Discovery Log Change Notices: Not Supported 00:11:35.031 Controller Attributes 00:11:35.031 128-bit Host Identifier: Not Supported 00:11:35.031 Non-Operational Permissive Mode: Not Supported 00:11:35.031 NVM Sets: Not Supported 00:11:35.031 Read Recovery Levels: Not Supported 00:11:35.031 Endurance Groups: Supported 00:11:35.031 Predictable Latency Mode: Not Supported 00:11:35.031 Traffic Based Keep ALive: Not Supported 00:11:35.031 Namespace Granularity: Not Supported 00:11:35.031 SQ Associations: Not Supported 00:11:35.031 UUID List: Not Supported 00:11:35.031 Multi-Domain Subsystem: Not Supported 00:11:35.031 Fixed Capacity Management: Not Supported 00:11:35.031 Variable Capacity Management: Not Supported 00:11:35.031 Delete Endurance Group: Not Supported 00:11:35.031 Delete NVM Set: Not Supported 00:11:35.031 Extended LBA Formats Supported: Supported 00:11:35.031 Flexible Data Placement Supported: Supported 00:11:35.031 00:11:35.031 Controller Memory Buffer Support 00:11:35.031 ================================ 00:11:35.031 Supported: No 00:11:35.031 00:11:35.031 Persistent Memory Region Support 00:11:35.031 ================================ 00:11:35.031 Supported: No 00:11:35.031 00:11:35.031 Admin Command Set Attributes 00:11:35.031 ============================ 00:11:35.031 Security Send/Receive: Not Supported 00:11:35.031 Format NVM: Supported 00:11:35.031 Firmware Activate/Download: Not Supported 00:11:35.031 Namespace Management: Supported 00:11:35.031 Device Self-Test: Not Supported 00:11:35.031 Directives: Supported 00:11:35.031 NVMe-MI: Not Supported 00:11:35.031 Virtualization Management: Not Supported 00:11:35.031 Doorbell Buffer Config: Supported 00:11:35.031 Get LBA Status Capability: Not Supported 00:11:35.031 Command & Feature Lockdown Capability: Not Supported 00:11:35.031 Abort Command Limit: 4 00:11:35.031 Async Event Request Limit: 4 00:11:35.031 Number of Firmware Slots: N/A 00:11:35.031 Firmware Slot 1 Read-Only: N/A 00:11:35.031 Firmware Activation Without Reset: N/A 00:11:35.031 Multiple Update Detection Support: N/A 00:11:35.031 Firmware Update Granularity: No Information Provided 00:11:35.031 Per-Namespace SMART Log: Yes 00:11:35.031 Asymmetric Namespace Access Log Page: Not Supported 00:11:35.031 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:35.031 Command Effects Log Page: Supported 00:11:35.031 Get Log Page Extended Data: Supported 00:11:35.031 Telemetry Log Pages: Not Supported 00:11:35.031 Persistent Event Log Pages: Not Supported 00:11:35.031 Supported Log Pages Log Page: May Support 00:11:35.031 
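Unlike the 12340/12341/12342 controllers, this 12343 controller (subsystem NQN nqn.2019-08.org.qemu:fdp-subsys3) reports Endurance Groups and Flexible Data Placement as Supported, so its report ends with the FDP log pages shown further down. The FDP statistics page there lets one compute the media-side metadata overhead; a sketch using the byte counts printed below:
  # Sketch: delta between media and host bytes from the FDP statistics below.
  host_bytes=419012608
  media_bytes=419057664
  echo "media overhead: $((media_bytes - host_bytes)) bytes"   # -> 45056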
Commands Supported & Effects Log Page: Not Supported 00:11:35.031 Feature Identifiers & Effects Log Page:May Support 00:11:35.031 NVMe-MI Commands & Effects Log Page: May Support 00:11:35.031 Data Area 4 for Telemetry Log: Not Supported 00:11:35.031 Error Log Page Entries Supported: 1 00:11:35.031 Keep Alive: Not Supported 00:11:35.031 00:11:35.031 NVM Command Set Attributes 00:11:35.031 ========================== 00:11:35.031 Submission Queue Entry Size 00:11:35.031 Max: 64 00:11:35.031 Min: 64 00:11:35.031 Completion Queue Entry Size 00:11:35.031 Max: 16 00:11:35.031 Min: 16 00:11:35.031 Number of Namespaces: 256 00:11:35.031 Compare Command: Supported 00:11:35.031 Write Uncorrectable Command: Not Supported 00:11:35.031 Dataset Management Command: Supported 00:11:35.031 Write Zeroes Command: Supported 00:11:35.031 Set Features Save Field: Supported 00:11:35.031 Reservations: Not Supported 00:11:35.031 Timestamp: Supported 00:11:35.031 Copy: Supported 00:11:35.031 Volatile Write Cache: Present 00:11:35.031 Atomic Write Unit (Normal): 1 00:11:35.031 Atomic Write Unit (PFail): 1 00:11:35.031 Atomic Compare & Write Unit: 1 00:11:35.031 Fused Compare & Write: Not Supported 00:11:35.031 Scatter-Gather List 00:11:35.031 SGL Command Set: Supported 00:11:35.031 SGL Keyed: Not Supported 00:11:35.031 SGL Bit Bucket Descriptor: Not Supported 00:11:35.031 SGL Metadata Pointer: Not Supported 00:11:35.031 Oversized SGL: Not Supported 00:11:35.031 SGL Metadata Address: Not Supported 00:11:35.031 SGL Offset: Not Supported 00:11:35.031 Transport SGL Data Block: Not Supported 00:11:35.031 Replay Protected Memory Block: Not Supported 00:11:35.031 00:11:35.031 Firmware Slot Information 00:11:35.031 ========================= 00:11:35.031 Active slot: 1 00:11:35.031 Slot 1 Firmware Revision: 1.0 00:11:35.031 00:11:35.031 00:11:35.031 Commands Supported and Effects 00:11:35.031 ============================== 00:11:35.031 Admin Commands 00:11:35.031 -------------- 00:11:35.031 Delete I/O Submission Queue (00h): Supported 00:11:35.031 Create I/O Submission Queue (01h): Supported 00:11:35.031 Get Log Page (02h): Supported 00:11:35.031 Delete I/O Completion Queue (04h): Supported 00:11:35.031 Create I/O Completion Queue (05h): Supported 00:11:35.031 Identify (06h): Supported 00:11:35.031 Abort (08h): Supported 00:11:35.031 Set Features (09h): Supported 00:11:35.031 Get Features (0Ah): Supported 00:11:35.031 Asynchronous Event Request (0Ch): Supported 00:11:35.031 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:35.031 Directive Send (19h): Supported 00:11:35.031 Directive Receive (1Ah): Supported 00:11:35.031 Virtualization Management (1Ch): Supported 00:11:35.031 Doorbell Buffer Config (7Ch): Supported 00:11:35.031 Format NVM (80h): Supported LBA-Change 00:11:35.031 I/O Commands 00:11:35.031 ------------ 00:11:35.031 Flush (00h): Supported LBA-Change 00:11:35.031 Write (01h): Supported LBA-Change 00:11:35.031 Read (02h): Supported 00:11:35.031 Compare (05h): Supported 00:11:35.031 Write Zeroes (08h): Supported LBA-Change 00:11:35.031 Dataset Management (09h): Supported LBA-Change 00:11:35.031 Unknown (0Ch): Supported 00:11:35.031 Unknown (12h): Supported 00:11:35.031 Copy (19h): Supported LBA-Change 00:11:35.031 Unknown (1Dh): Supported LBA-Change 00:11:35.031 00:11:35.031 Error Log 00:11:35.031 ========= 00:11:35.031 00:11:35.031 Arbitration 00:11:35.031 =========== 00:11:35.031 Arbitration Burst: no limit 00:11:35.031 00:11:35.031 Power Management 00:11:35.031 ================ 00:11:35.031 Number 
of Power States: 1 00:11:35.031 Current Power State: Power State #0 00:11:35.032 Power State #0: 00:11:35.032 Max Power: 25.00 W 00:11:35.032 Non-Operational State: Operational 00:11:35.032 Entry Latency: 16 microseconds 00:11:35.032 Exit Latency: 4 microseconds 00:11:35.032 Relative Read Throughput: 0 00:11:35.032 Relative Read Latency: 0 00:11:35.032 Relative Write Throughput: 0 00:11:35.032 Relative Write Latency: 0 00:11:35.032 Idle Power: Not Reported 00:11:35.032 Active Power: Not Reported 00:11:35.032 Non-Operational Permissive Mode: Not Supported 00:11:35.032 00:11:35.032 Health Information 00:11:35.032 ================== 00:11:35.032 Critical Warnings: 00:11:35.032 Available Spare Space: OK 00:11:35.032 Temperature: OK 00:11:35.032 Device Reliability: OK 00:11:35.032 Read Only: No 00:11:35.032 Volatile Memory Backup: OK 00:11:35.032 Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.032 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:35.032 Available Spare: 0% 00:11:35.032 Available Spare Threshold: 0% 00:11:35.032 Life Percentage Used: 0% 00:11:35.032 Data Units Read: 770 00:11:35.032 Data Units Written: 664 00:11:35.032 Host Read Commands: 33110 00:11:35.032 Host Write Commands: 31700 00:11:35.032 Controller Busy Time: 0 minutes 00:11:35.032 Power Cycles: 0 00:11:35.032 Power On Hours: 0 hours 00:11:35.032 Unsafe Shutdowns: 0 00:11:35.032 Unrecoverable Media Errors: 0 00:11:35.032 Lifetime Error Log Entries: 0 00:11:35.032 Warning Temperature Time: 0 minutes 00:11:35.032 Critical Temperature Time: 0 minutes 00:11:35.032 00:11:35.032 Number of Queues 00:11:35.032 ================ 00:11:35.032 Number of I/O Submission Queues: 64 00:11:35.032 Number of I/O Completion Queues: 64 00:11:35.032 00:11:35.032 ZNS Specific Controller Data 00:11:35.032 ============================ 00:11:35.032 Zone Append Size Limit: 0 00:11:35.032 00:11:35.032 00:11:35.032 Active Namespaces 00:11:35.032 ================= 00:11:35.032 Namespace ID:1 00:11:35.032 Error Recovery Timeout: Unlimited 00:11:35.032 Command Set Identifier: NVM (00h) 00:11:35.032 Deallocate: Supported 00:11:35.032 Deallocated/Unwritten Error: Supported 00:11:35.032 Deallocated Read Value: All 0x00 00:11:35.032 Deallocate in Write Zeroes: Not Supported 00:11:35.032 Deallocated Guard Field: 0xFFFF 00:11:35.032 Flush: Supported 00:11:35.032 Reservation: Not Supported 00:11:35.032 Namespace Sharing Capabilities: Multiple Controllers 00:11:35.032 Size (in LBAs): 262144 (1GiB) 00:11:35.032 Capacity (in LBAs): 262144 (1GiB) 00:11:35.032 Utilization (in LBAs): 262144 (1GiB) 00:11:35.032 Thin Provisioning: Not Supported 00:11:35.032 Per-NS Atomic Units: No 00:11:35.032 Maximum Single Source Range Length: 128 00:11:35.032 Maximum Copy Length: 128 00:11:35.032 Maximum Source Range Count: 128 00:11:35.032 NGUID/EUI64 Never Reused: No 00:11:35.032 Namespace Write Protected: No 00:11:35.032 Endurance group ID: 1 00:11:35.032 Number of LBA Formats: 8 00:11:35.032 Current LBA Format: LBA Format #04 00:11:35.032 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:35.032 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:35.032 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:35.032 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:35.032 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:35.032 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:35.032 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:35.032 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:35.032 00:11:35.032 Get Feature FDP: 00:11:35.032 
================ 00:11:35.032 Enabled: Yes 00:11:35.032 FDP configuration index: 0 00:11:35.032 00:11:35.032 FDP configurations log page 00:11:35.032 =========================== 00:11:35.032 Number of FDP configurations: 1 00:11:35.032 Version: 0 00:11:35.032 Size: 112 00:11:35.032 FDP Configuration Descriptor: 0 00:11:35.032 Descriptor Size: 96 00:11:35.032 Reclaim Group Identifier format: 2 00:11:35.032 FDP Volatile Write Cache: Not Present 00:11:35.032 FDP Configuration: Valid 00:11:35.032 Vendor Specific Size: 0 00:11:35.032 Number of Reclaim Groups: 2 00:11:35.032 Number of Reclaim Unit Handles: 8 00:11:35.032 Max Placement Identifiers: 128 00:11:35.032 Number of Namespaces Supported: 256 00:11:35.032 Reclaim Unit Nominal Size: 6000000 bytes 00:11:35.032 Estimated Reclaim Unit Time Limit: Not Reported 00:11:35.032 RUH Desc #000: RUH Type: Initially Isolated 00:11:35.032 RUH Desc #001: RUH Type: Initially Isolated 00:11:35.032 RUH Desc #002: RUH Type: Initially Isolated 00:11:35.032 RUH Desc #003: RUH Type: Initially Isolated 00:11:35.032 RUH Desc #004: RUH Type: Initially Isolated 00:11:35.032 RUH Desc #005: RUH Type: Initially Isolated 00:11:35.032 RUH Desc #006: RUH Type: Initially Isolated 00:11:35.032 RUH Desc #007: RUH Type: Initially Isolated 00:11:35.032 00:11:35.032 FDP reclaim unit handle usage log page 00:11:35.032 ====================================== 00:11:35.032 Number of Reclaim Unit Handles: 8 00:11:35.032 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:35.032 RUH Usage Desc #001: RUH Attributes: Unused 00:11:35.032 RUH Usage Desc #002: RUH Attributes: Unused 00:11:35.032 RUH Usage Desc #003: RUH Attributes: Unused 00:11:35.032 RUH Usage Desc #004: RUH Attributes: Unused 00:11:35.032 RUH Usage Desc #005: RUH Attributes: Unused 00:11:35.032 RUH Usage Desc #006: RUH Attributes: Unused 00:11:35.032 RUH Usage Desc #007: RUH Attributes: Unused 00:11:35.032 00:11:35.032 FDP statistics log page 00:11:35.032 ======================= 00:11:35.032 Host bytes with metadata written: 419012608 00:11:35.032 Media bytes with metadata written: 419057664 00:11:35.032 Media bytes erased: 0 00:11:35.032 00:11:35.032 FDP events log page 00:11:35.032 =================== 00:11:35.032 Number of FDP events: 0 00:11:35.032 00:11:35.032 NVM Specific Namespace Data 00:11:35.032 =========================== 00:11:35.032 Logical Block Storage Tag Mask: 0 00:11:35.032 Protection Information Capabilities: 00:11:35.032 16b Guard Protection Information Storage Tag Support: No 00:11:35.032 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:35.032 Storage Tag Check Read Support: No 00:11:35.032 Extended LBA Format #00: Storage Tag Size: 0, Protection Information Format: 16b Guard PI 00:11:35.032 Extended LBA Format #01: Storage Tag Size: 0, Protection Information Format: 16b Guard PI 00:11:35.032 Extended LBA Format #02: Storage Tag Size: 0, Protection Information Format: 16b Guard PI 00:11:35.032 Extended LBA Format #03: Storage Tag Size: 0, Protection Information Format: 16b Guard PI 00:11:35.032 Extended LBA Format #04: Storage Tag Size: 0, Protection Information Format: 16b Guard PI 00:11:35.032 Extended LBA Format #05: Storage Tag Size: 0, Protection Information Format: 16b Guard PI 00:11:35.032 Extended LBA Format #06: Storage Tag Size: 0, Protection Information Format: 16b Guard PI 00:11:35.032 Extended LBA Format #07: Storage Tag Size: 0, Protection Information Format: 16b Guard PI 00:11:35.032 00:11:35.032 real 0m1.698s 00:11:35.032 user
0m0.686s 00:11:35.032 sys 0m0.773s 00:11:35.032 18:18:46 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:35.032 18:18:46 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:35.032 ************************************ 00:11:35.032 END TEST nvme_identify 00:11:35.032 ************************************ 00:11:35.032 18:18:46 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:35.032 18:18:46 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:35.032 18:18:46 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:35.032 18:18:46 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.032 18:18:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:35.032 ************************************ 00:11:35.032 START TEST nvme_perf 00:11:35.032 ************************************ 00:11:35.032 18:18:46 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:11:35.032 18:18:46 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:36.412 Initializing NVMe Controllers 00:11:36.412 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:36.412 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:36.412 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:36.412 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:36.412 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:36.412 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:36.412 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:36.412 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:36.412 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:36.412 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:36.412 Initialization complete. Launching workers. 
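For context on the table that follows: the spdk_nvme_perf invocation above requests -q 128 (queue depth), -o 12288 (12 KiB per I/O), -w read, and -t 1 (a one-second run); the doubled -L appears to enable the detailed per-range latency histograms printed after the percentile summaries. The MiB/s column below is simply IOPS × I/O size. A minimal sanity check of that relationship, using the per-device IOPS value 12599.90 from the table (an illustrative one-liner, not part of the test harness):

    awk 'BEGIN {
        # throughput = IOPS * io_size_bytes, expressed in MiB/s
        printf "%.2f MiB/s\n", 12599.90 * 12288 / (1024 * 1024)   # -> 147.66
    }'

which reproduces the 147.66 MiB/s reported for each PCIE device.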
00:11:36.412 ======================================================== 00:11:36.412 Latency(us) 00:11:36.412 Device Information : IOPS MiB/s Average min max 00:11:36.412 PCIE (0000:00:11.0) NSID 1 from core 0: 12599.90 147.66 10179.98 8058.03 42973.36 00:11:36.412 PCIE (0000:00:13.0) NSID 1 from core 0: 12599.90 147.66 10154.79 8042.35 40196.11 00:11:36.412 PCIE (0000:00:10.0) NSID 1 from core 0: 12599.90 147.66 10126.32 7916.46 37487.42 00:11:36.412 PCIE (0000:00:12.0) NSID 1 from core 0: 12599.90 147.66 10100.00 8040.98 34249.33 00:11:36.412 PCIE (0000:00:12.0) NSID 2 from core 0: 12599.90 147.66 10072.54 8013.21 31935.30 00:11:36.412 PCIE (0000:00:12.0) NSID 3 from core 0: 12599.90 147.66 10044.81 8052.51 28868.04 00:11:36.412 ======================================================== 00:11:36.412 Total : 75599.39 885.93 10113.07 7916.46 42973.36 00:11:36.412 00:11:36.412 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:36.412 ================================================================================= 00:11:36.412 1.00000% : 8400.524us 00:11:36.412 10.00000% : 8817.571us 00:11:36.412 25.00000% : 9353.775us 00:11:36.412 50.00000% : 9889.978us 00:11:36.412 75.00000% : 10426.182us 00:11:36.412 90.00000% : 11081.542us 00:11:36.412 95.00000% : 11677.324us 00:11:36.412 98.00000% : 12570.996us 00:11:36.412 99.00000% : 32887.156us 00:11:36.412 99.50000% : 40751.476us 00:11:36.412 99.90000% : 42657.978us 00:11:36.412 99.99000% : 43134.604us 00:11:36.412 99.99900% : 43134.604us 00:11:36.412 99.99990% : 43134.604us 00:11:36.412 99.99999% : 43134.604us 00:11:36.412 00:11:36.412 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:36.412 ================================================================================= 00:11:36.412 1.00000% : 8340.945us 00:11:36.412 10.00000% : 8817.571us 00:11:36.412 25.00000% : 9353.775us 00:11:36.413 50.00000% : 9889.978us 00:11:36.413 75.00000% : 10426.182us 00:11:36.413 90.00000% : 11081.542us 00:11:36.413 95.00000% : 11736.902us 00:11:36.413 98.00000% : 12511.418us 00:11:36.413 99.00000% : 30742.342us 00:11:36.413 99.50000% : 37891.724us 00:11:36.413 99.90000% : 39798.225us 00:11:36.413 99.99000% : 40274.851us 00:11:36.413 99.99900% : 40274.851us 00:11:36.413 99.99990% : 40274.851us 00:11:36.413 99.99999% : 40274.851us 00:11:36.413 00:11:36.413 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:36.413 ================================================================================= 00:11:36.413 1.00000% : 8221.789us 00:11:36.413 10.00000% : 8817.571us 00:11:36.413 25.00000% : 9294.196us 00:11:36.413 50.00000% : 9830.400us 00:11:36.413 75.00000% : 10426.182us 00:11:36.413 90.00000% : 11081.542us 00:11:36.413 95.00000% : 11796.480us 00:11:36.413 98.00000% : 12749.731us 00:11:36.413 99.00000% : 27644.276us 00:11:36.413 99.50000% : 34793.658us 00:11:36.413 99.90000% : 37176.785us 00:11:36.413 99.99000% : 37653.411us 00:11:36.413 99.99900% : 37653.411us 00:11:36.413 99.99990% : 37653.411us 00:11:36.413 99.99999% : 37653.411us 00:11:36.413 00:11:36.413 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:36.413 ================================================================================= 00:11:36.413 1.00000% : 8340.945us 00:11:36.413 10.00000% : 8817.571us 00:11:36.413 25.00000% : 9353.775us 00:11:36.413 50.00000% : 9889.978us 00:11:36.413 75.00000% : 10366.604us 00:11:36.413 90.00000% : 11081.542us 00:11:36.413 95.00000% : 11796.480us 00:11:36.413 98.00000% : 13166.778us 
00:11:36.413 99.00000% : 24903.680us 00:11:36.413 99.50000% : 31933.905us 00:11:36.413 99.90000% : 33840.407us 00:11:36.413 99.99000% : 34317.033us 00:11:36.413 99.99900% : 34317.033us 00:11:36.413 99.99990% : 34317.033us 00:11:36.413 99.99999% : 34317.033us 00:11:36.413 00:11:36.413 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:36.413 ================================================================================= 00:11:36.413 1.00000% : 8340.945us 00:11:36.413 10.00000% : 8817.571us 00:11:36.413 25.00000% : 9353.775us 00:11:36.413 50.00000% : 9830.400us 00:11:36.413 75.00000% : 10366.604us 00:11:36.413 90.00000% : 11081.542us 00:11:36.413 95.00000% : 11736.902us 00:11:36.413 98.00000% : 13345.513us 00:11:36.413 99.00000% : 22282.240us 00:11:36.413 99.50000% : 29431.622us 00:11:36.413 99.90000% : 31695.593us 00:11:36.413 99.99000% : 31933.905us 00:11:36.413 99.99900% : 32172.218us 00:11:36.413 99.99990% : 32172.218us 00:11:36.413 99.99999% : 32172.218us 00:11:36.413 00:11:36.413 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:36.413 ================================================================================= 00:11:36.413 1.00000% : 8340.945us 00:11:36.413 10.00000% : 8817.571us 00:11:36.413 25.00000% : 9353.775us 00:11:36.413 50.00000% : 9889.978us 00:11:36.413 75.00000% : 10426.182us 00:11:36.413 90.00000% : 11081.542us 00:11:36.413 95.00000% : 11677.324us 00:11:36.413 98.00000% : 13405.091us 00:11:36.413 99.00000% : 19422.487us 00:11:36.413 99.50000% : 26571.869us 00:11:36.413 99.90000% : 28478.371us 00:11:36.413 99.99000% : 28835.840us 00:11:36.413 99.99900% : 28954.996us 00:11:36.413 99.99990% : 28954.996us 00:11:36.413 99.99999% : 28954.996us 00:11:36.413 00:11:36.413 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:36.413 ============================================================================== 00:11:36.413 Range in us Cumulative IO count 00:11:36.413 8043.055 - 8102.633: 0.0238% ( 3) 00:11:36.413 8102.633 - 8162.211: 0.0952% ( 9) 00:11:36.413 8162.211 - 8221.789: 0.2459% ( 19) 00:11:36.413 8221.789 - 8281.367: 0.5314% ( 36) 00:11:36.413 8281.367 - 8340.945: 0.9756% ( 56) 00:11:36.413 8340.945 - 8400.524: 1.6497% ( 85) 00:11:36.413 8400.524 - 8460.102: 2.6332% ( 124) 00:11:36.413 8460.102 - 8519.680: 3.7357% ( 139) 00:11:36.413 8519.680 - 8579.258: 4.9651% ( 155) 00:11:36.413 8579.258 - 8638.836: 6.1707% ( 152) 00:11:36.413 8638.836 - 8698.415: 7.5111% ( 169) 00:11:36.413 8698.415 - 8757.993: 8.8912% ( 174) 00:11:36.413 8757.993 - 8817.571: 10.3506% ( 184) 00:11:36.413 8817.571 - 8877.149: 11.8417% ( 188) 00:11:36.413 8877.149 - 8936.727: 13.2456% ( 177) 00:11:36.413 8936.727 - 8996.305: 14.8319% ( 200) 00:11:36.413 8996.305 - 9055.884: 16.4499% ( 204) 00:11:36.413 9055.884 - 9115.462: 18.1551% ( 215) 00:11:36.413 9115.462 - 9175.040: 20.1063% ( 246) 00:11:36.413 9175.040 - 9234.618: 22.3192% ( 279) 00:11:36.413 9234.618 - 9294.196: 24.8096% ( 314) 00:11:36.413 9294.196 - 9353.775: 27.4508% ( 333) 00:11:36.413 9353.775 - 9413.353: 30.1634% ( 342) 00:11:36.413 9413.353 - 9472.931: 32.8839% ( 343) 00:11:36.413 9472.931 - 9532.509: 35.7392% ( 360) 00:11:36.413 9532.509 - 9592.087: 38.5073% ( 349) 00:11:36.413 9592.087 - 9651.665: 41.1088% ( 328) 00:11:36.413 9651.665 - 9711.244: 43.9879% ( 363) 00:11:36.413 9711.244 - 9770.822: 46.8036% ( 355) 00:11:36.413 9770.822 - 9830.400: 49.7224% ( 368) 00:11:36.413 9830.400 - 9889.978: 52.5777% ( 360) 00:11:36.413 9889.978 - 9949.556: 55.4806% ( 366) 00:11:36.413 
9949.556 - 10009.135: 58.3598% ( 363) 00:11:36.413 10009.135 - 10068.713: 61.0882% ( 344) 00:11:36.413 10068.713 - 10128.291: 63.8404% ( 347) 00:11:36.413 10128.291 - 10187.869: 66.6006% ( 348) 00:11:36.413 10187.869 - 10247.447: 69.2893% ( 339) 00:11:36.413 10247.447 - 10307.025: 71.8274% ( 320) 00:11:36.413 10307.025 - 10366.604: 74.3100% ( 313) 00:11:36.413 10366.604 - 10426.182: 76.5942% ( 288) 00:11:36.413 10426.182 - 10485.760: 78.5850% ( 251) 00:11:36.413 10485.760 - 10545.338: 80.4648% ( 237) 00:11:36.413 10545.338 - 10604.916: 82.2256% ( 222) 00:11:36.413 10604.916 - 10664.495: 83.6612% ( 181) 00:11:36.413 10664.495 - 10724.073: 85.0571% ( 176) 00:11:36.413 10724.073 - 10783.651: 86.2944% ( 156) 00:11:36.413 10783.651 - 10843.229: 87.3255% ( 130) 00:11:36.413 10843.229 - 10902.807: 88.2297% ( 114) 00:11:36.413 10902.807 - 10962.385: 89.0308% ( 101) 00:11:36.413 10962.385 - 11021.964: 89.7367% ( 89) 00:11:36.413 11021.964 - 11081.542: 90.3712% ( 80) 00:11:36.413 11081.542 - 11141.120: 90.9026% ( 67) 00:11:36.413 11141.120 - 11200.698: 91.3944% ( 62) 00:11:36.413 11200.698 - 11260.276: 91.8861% ( 62) 00:11:36.413 11260.276 - 11319.855: 92.3858% ( 63) 00:11:36.413 11319.855 - 11379.433: 92.8855% ( 63) 00:11:36.413 11379.433 - 11439.011: 93.3852% ( 63) 00:11:36.413 11439.011 - 11498.589: 93.9007% ( 65) 00:11:36.413 11498.589 - 11558.167: 94.3845% ( 61) 00:11:36.413 11558.167 - 11617.745: 94.8842% ( 63) 00:11:36.413 11617.745 - 11677.324: 95.3601% ( 60) 00:11:36.413 11677.324 - 11736.902: 95.7963% ( 55) 00:11:36.413 11736.902 - 11796.480: 96.2326% ( 55) 00:11:36.413 11796.480 - 11856.058: 96.6053% ( 47) 00:11:36.413 11856.058 - 11915.636: 96.9464% ( 43) 00:11:36.413 11915.636 - 11975.215: 97.2081% ( 33) 00:11:36.413 11975.215 - 12034.793: 97.3747% ( 21) 00:11:36.413 12034.793 - 12094.371: 97.4937% ( 15) 00:11:36.413 12094.371 - 12153.949: 97.5968% ( 13) 00:11:36.413 12153.949 - 12213.527: 97.6761% ( 10) 00:11:36.413 12213.527 - 12273.105: 97.7554% ( 10) 00:11:36.413 12273.105 - 12332.684: 97.8188% ( 8) 00:11:36.413 12332.684 - 12392.262: 97.8744% ( 7) 00:11:36.413 12392.262 - 12451.840: 97.9378% ( 8) 00:11:36.413 12451.840 - 12511.418: 97.9933% ( 7) 00:11:36.413 12511.418 - 12570.996: 98.0251% ( 4) 00:11:36.413 12570.996 - 12630.575: 98.0330% ( 1) 00:11:36.413 12630.575 - 12690.153: 98.0489% ( 2) 00:11:36.413 12690.153 - 12749.731: 98.0647% ( 2) 00:11:36.413 12749.731 - 12809.309: 98.0806% ( 2) 00:11:36.413 12809.309 - 12868.887: 98.0964% ( 2) 00:11:36.413 12868.887 - 12928.465: 98.1123% ( 2) 00:11:36.413 12928.465 - 12988.044: 98.1282% ( 2) 00:11:36.413 12988.044 - 13047.622: 98.1440% ( 2) 00:11:36.413 13047.622 - 13107.200: 98.1599% ( 2) 00:11:36.413 13107.200 - 13166.778: 98.1758% ( 2) 00:11:36.413 13166.778 - 13226.356: 98.1837% ( 1) 00:11:36.413 13226.356 - 13285.935: 98.1996% ( 2) 00:11:36.413 13345.513 - 13405.091: 98.2154% ( 2) 00:11:36.413 13405.091 - 13464.669: 98.2313% ( 2) 00:11:36.413 13464.669 - 13524.247: 98.2392% ( 1) 00:11:36.413 13524.247 - 13583.825: 98.2551% ( 2) 00:11:36.413 13583.825 - 13643.404: 98.2709% ( 2) 00:11:36.413 13643.404 - 13702.982: 98.2868% ( 2) 00:11:36.413 13702.982 - 13762.560: 98.3106% ( 3) 00:11:36.413 13762.560 - 13822.138: 98.3265% ( 2) 00:11:36.413 13822.138 - 13881.716: 98.3423% ( 2) 00:11:36.413 13881.716 - 13941.295: 98.3820% ( 5) 00:11:36.413 13941.295 - 14000.873: 98.4137% ( 4) 00:11:36.413 14000.873 - 14060.451: 98.4613% ( 6) 00:11:36.413 14060.451 - 14120.029: 98.5010% ( 5) 00:11:36.413 14120.029 - 14179.607: 98.5327% ( 4) 00:11:36.413 
14179.607 - 14239.185: 98.5723% ( 5) 00:11:36.413 14239.185 - 14298.764: 98.6199% ( 6) 00:11:36.413 14298.764 - 14358.342: 98.6596% ( 5) 00:11:36.413 14358.342 - 14417.920: 98.6834% ( 3) 00:11:36.413 14417.920 - 14477.498: 98.6992% ( 2) 00:11:36.413 14477.498 - 14537.076: 98.7230% ( 3) 00:11:36.413 14537.076 - 14596.655: 98.7468% ( 3) 00:11:36.413 14596.655 - 14656.233: 98.7627% ( 2) 00:11:36.413 14656.233 - 14715.811: 98.7865% ( 3) 00:11:36.413 14715.811 - 14775.389: 98.8103% ( 3) 00:11:36.413 14775.389 - 14834.967: 98.8261% ( 2) 00:11:36.413 14834.967 - 14894.545: 98.8499% ( 3) 00:11:36.413 14894.545 - 14954.124: 98.8658% ( 2) 00:11:36.413 14954.124 - 15013.702: 98.8896% ( 3) 00:11:36.414 15013.702 - 15073.280: 98.9134% ( 3) 00:11:36.414 15073.280 - 15132.858: 98.9293% ( 2) 00:11:36.414 15132.858 - 15192.436: 98.9530% ( 3) 00:11:36.414 15192.436 - 15252.015: 98.9768% ( 3) 00:11:36.414 15252.015 - 15371.171: 98.9848% ( 1) 00:11:36.414 32648.844 - 32887.156: 99.0006% ( 2) 00:11:36.414 32887.156 - 33125.469: 99.0482% ( 6) 00:11:36.414 33125.469 - 33363.782: 99.0958% ( 6) 00:11:36.414 33363.782 - 33602.095: 99.1434% ( 6) 00:11:36.414 33602.095 - 33840.407: 99.1989% ( 7) 00:11:36.414 33840.407 - 34078.720: 99.2465% ( 6) 00:11:36.414 34078.720 - 34317.033: 99.2941% ( 6) 00:11:36.414 34317.033 - 34555.345: 99.3417% ( 6) 00:11:36.414 34555.345 - 34793.658: 99.3893% ( 6) 00:11:36.414 34793.658 - 35031.971: 99.4448% ( 7) 00:11:36.414 35031.971 - 35270.284: 99.4924% ( 6) 00:11:36.414 40513.164 - 40751.476: 99.5320% ( 5) 00:11:36.414 40751.476 - 40989.789: 99.5796% ( 6) 00:11:36.414 40989.789 - 41228.102: 99.6272% ( 6) 00:11:36.414 41228.102 - 41466.415: 99.6748% ( 6) 00:11:36.414 41466.415 - 41704.727: 99.7303% ( 7) 00:11:36.414 41704.727 - 41943.040: 99.7779% ( 6) 00:11:36.414 41943.040 - 42181.353: 99.8255% ( 6) 00:11:36.414 42181.353 - 42419.665: 99.8810% ( 7) 00:11:36.414 42419.665 - 42657.978: 99.9286% ( 6) 00:11:36.414 42657.978 - 42896.291: 99.9841% ( 7) 00:11:36.414 42896.291 - 43134.604: 100.0000% ( 2) 00:11:36.414 00:11:36.414 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:36.414 ============================================================================== 00:11:36.414 Range in us Cumulative IO count 00:11:36.414 7983.476 - 8043.055: 0.0079% ( 1) 00:11:36.414 8043.055 - 8102.633: 0.0476% ( 5) 00:11:36.414 8102.633 - 8162.211: 0.0952% ( 6) 00:11:36.414 8162.211 - 8221.789: 0.2379% ( 18) 00:11:36.414 8221.789 - 8281.367: 0.5552% ( 40) 00:11:36.414 8281.367 - 8340.945: 1.0311% ( 60) 00:11:36.414 8340.945 - 8400.524: 1.7608% ( 92) 00:11:36.414 8400.524 - 8460.102: 2.7602% ( 126) 00:11:36.414 8460.102 - 8519.680: 3.8071% ( 132) 00:11:36.414 8519.680 - 8579.258: 5.0206% ( 153) 00:11:36.414 8579.258 - 8638.836: 6.2421% ( 154) 00:11:36.414 8638.836 - 8698.415: 7.5508% ( 165) 00:11:36.414 8698.415 - 8757.993: 8.9308% ( 174) 00:11:36.414 8757.993 - 8817.571: 10.2951% ( 172) 00:11:36.414 8817.571 - 8877.149: 11.7782% ( 187) 00:11:36.414 8877.149 - 8936.727: 13.2694% ( 188) 00:11:36.414 8936.727 - 8996.305: 14.8556% ( 200) 00:11:36.414 8996.305 - 9055.884: 16.4737% ( 204) 00:11:36.414 9055.884 - 9115.462: 18.1472% ( 211) 00:11:36.414 9115.462 - 9175.040: 20.0587% ( 241) 00:11:36.414 9175.040 - 9234.618: 22.2081% ( 271) 00:11:36.414 9234.618 - 9294.196: 24.6589% ( 309) 00:11:36.414 9294.196 - 9353.775: 27.3001% ( 333) 00:11:36.414 9353.775 - 9413.353: 30.1079% ( 354) 00:11:36.414 9413.353 - 9472.931: 32.8601% ( 347) 00:11:36.414 9472.931 - 9532.509: 35.6044% ( 346) 00:11:36.414 
9532.509 - 9592.087: 38.2297% ( 331) 00:11:36.414 9592.087 - 9651.665: 40.9264% ( 340) 00:11:36.414 9651.665 - 9711.244: 43.5596% ( 332) 00:11:36.414 9711.244 - 9770.822: 46.3436% ( 351) 00:11:36.414 9770.822 - 9830.400: 49.2544% ( 367) 00:11:36.414 9830.400 - 9889.978: 52.2049% ( 372) 00:11:36.414 9889.978 - 9949.556: 55.2189% ( 380) 00:11:36.414 9949.556 - 10009.135: 58.1377% ( 368) 00:11:36.414 10009.135 - 10068.713: 61.1834% ( 384) 00:11:36.414 10068.713 - 10128.291: 64.0387% ( 360) 00:11:36.414 10128.291 - 10187.869: 66.9337% ( 365) 00:11:36.414 10187.869 - 10247.447: 69.7335% ( 353) 00:11:36.414 10247.447 - 10307.025: 72.4461% ( 342) 00:11:36.414 10307.025 - 10366.604: 74.8810% ( 307) 00:11:36.414 10366.604 - 10426.182: 77.1574% ( 287) 00:11:36.414 10426.182 - 10485.760: 79.2116% ( 259) 00:11:36.414 10485.760 - 10545.338: 81.0359% ( 230) 00:11:36.414 10545.338 - 10604.916: 82.5746% ( 194) 00:11:36.414 10604.916 - 10664.495: 84.0260% ( 183) 00:11:36.414 10664.495 - 10724.073: 85.3188% ( 163) 00:11:36.414 10724.073 - 10783.651: 86.5244% ( 152) 00:11:36.414 10783.651 - 10843.229: 87.4762% ( 120) 00:11:36.414 10843.229 - 10902.807: 88.3804% ( 114) 00:11:36.414 10902.807 - 10962.385: 89.1101% ( 92) 00:11:36.414 10962.385 - 11021.964: 89.7605% ( 82) 00:11:36.414 11021.964 - 11081.542: 90.4029% ( 81) 00:11:36.414 11081.542 - 11141.120: 90.9185% ( 65) 00:11:36.414 11141.120 - 11200.698: 91.4023% ( 61) 00:11:36.414 11200.698 - 11260.276: 91.8782% ( 60) 00:11:36.414 11260.276 - 11319.855: 92.3699% ( 62) 00:11:36.414 11319.855 - 11379.433: 92.8299% ( 58) 00:11:36.414 11379.433 - 11439.011: 93.2662% ( 55) 00:11:36.414 11439.011 - 11498.589: 93.7103% ( 56) 00:11:36.414 11498.589 - 11558.167: 94.1228% ( 52) 00:11:36.414 11558.167 - 11617.745: 94.5431% ( 53) 00:11:36.414 11617.745 - 11677.324: 94.9952% ( 57) 00:11:36.414 11677.324 - 11736.902: 95.3918% ( 50) 00:11:36.414 11736.902 - 11796.480: 95.8201% ( 54) 00:11:36.414 11796.480 - 11856.058: 96.1453% ( 41) 00:11:36.414 11856.058 - 11915.636: 96.4467% ( 38) 00:11:36.414 11915.636 - 11975.215: 96.7322% ( 36) 00:11:36.414 11975.215 - 12034.793: 96.9622% ( 29) 00:11:36.414 12034.793 - 12094.371: 97.1367% ( 22) 00:11:36.414 12094.371 - 12153.949: 97.3192% ( 23) 00:11:36.414 12153.949 - 12213.527: 97.4778% ( 20) 00:11:36.414 12213.527 - 12273.105: 97.6285% ( 19) 00:11:36.414 12273.105 - 12332.684: 97.7554% ( 16) 00:11:36.414 12332.684 - 12392.262: 97.8664% ( 14) 00:11:36.414 12392.262 - 12451.840: 97.9775% ( 14) 00:11:36.414 12451.840 - 12511.418: 98.0489% ( 9) 00:11:36.414 12511.418 - 12570.996: 98.0806% ( 4) 00:11:36.414 12570.996 - 12630.575: 98.1123% ( 4) 00:11:36.414 12630.575 - 12690.153: 98.1440% ( 4) 00:11:36.414 12690.153 - 12749.731: 98.1758% ( 4) 00:11:36.414 12749.731 - 12809.309: 98.2075% ( 4) 00:11:36.414 12809.309 - 12868.887: 98.2234% ( 2) 00:11:36.414 12868.887 - 12928.465: 98.2471% ( 3) 00:11:36.414 12928.465 - 12988.044: 98.2789% ( 4) 00:11:36.414 12988.044 - 13047.622: 98.3027% ( 3) 00:11:36.414 13047.622 - 13107.200: 98.3185% ( 2) 00:11:36.414 13107.200 - 13166.778: 98.3265% ( 1) 00:11:36.414 13166.778 - 13226.356: 98.3423% ( 2) 00:11:36.414 13226.356 - 13285.935: 98.3503% ( 1) 00:11:36.414 13285.935 - 13345.513: 98.3661% ( 2) 00:11:36.414 13345.513 - 13405.091: 98.3740% ( 1) 00:11:36.414 13405.091 - 13464.669: 98.3899% ( 2) 00:11:36.414 13464.669 - 13524.247: 98.4058% ( 2) 00:11:36.414 13524.247 - 13583.825: 98.4216% ( 2) 00:11:36.414 13583.825 - 13643.404: 98.4375% ( 2) 00:11:36.414 13643.404 - 13702.982: 98.4454% ( 1) 
00:11:36.414 13702.982 - 13762.560: 98.4613% ( 2) 00:11:36.414 13762.560 - 13822.138: 98.4772% ( 2) 00:11:36.414 14060.451 - 14120.029: 98.4930% ( 2) 00:11:36.414 14120.029 - 14179.607: 98.5168% ( 3) 00:11:36.414 14179.607 - 14239.185: 98.5406% ( 3) 00:11:36.414 14239.185 - 14298.764: 98.5644% ( 3) 00:11:36.414 14298.764 - 14358.342: 98.5803% ( 2) 00:11:36.414 14358.342 - 14417.920: 98.6041% ( 3) 00:11:36.414 14417.920 - 14477.498: 98.6279% ( 3) 00:11:36.414 14477.498 - 14537.076: 98.6516% ( 3) 00:11:36.414 14537.076 - 14596.655: 98.6754% ( 3) 00:11:36.414 14596.655 - 14656.233: 98.6992% ( 3) 00:11:36.414 14656.233 - 14715.811: 98.7151% ( 2) 00:11:36.414 14715.811 - 14775.389: 98.7389% ( 3) 00:11:36.414 14775.389 - 14834.967: 98.7627% ( 3) 00:11:36.414 14834.967 - 14894.545: 98.7865% ( 3) 00:11:36.414 14894.545 - 14954.124: 98.8023% ( 2) 00:11:36.414 14954.124 - 15013.702: 98.8261% ( 3) 00:11:36.414 15013.702 - 15073.280: 98.8499% ( 3) 00:11:36.414 15073.280 - 15132.858: 98.8737% ( 3) 00:11:36.414 15132.858 - 15192.436: 98.8975% ( 3) 00:11:36.414 15192.436 - 15252.015: 98.9134% ( 2) 00:11:36.414 15252.015 - 15371.171: 98.9530% ( 5) 00:11:36.414 15371.171 - 15490.327: 98.9848% ( 4) 00:11:36.414 30504.029 - 30742.342: 99.0006% ( 2) 00:11:36.414 30742.342 - 30980.655: 99.0403% ( 5) 00:11:36.414 30980.655 - 31218.967: 99.0958% ( 7) 00:11:36.414 31218.967 - 31457.280: 99.1434% ( 6) 00:11:36.414 31457.280 - 31695.593: 99.1910% ( 6) 00:11:36.414 31695.593 - 31933.905: 99.2386% ( 6) 00:11:36.414 31933.905 - 32172.218: 99.2862% ( 6) 00:11:36.414 32172.218 - 32410.531: 99.3338% ( 6) 00:11:36.414 32410.531 - 32648.844: 99.3813% ( 6) 00:11:36.414 32648.844 - 32887.156: 99.4289% ( 6) 00:11:36.414 32887.156 - 33125.469: 99.4765% ( 6) 00:11:36.414 33125.469 - 33363.782: 99.4924% ( 2) 00:11:36.414 37653.411 - 37891.724: 99.5241% ( 4) 00:11:36.414 37891.724 - 38130.036: 99.5717% ( 6) 00:11:36.414 38130.036 - 38368.349: 99.6193% ( 6) 00:11:36.414 38368.349 - 38606.662: 99.6669% ( 6) 00:11:36.415 38606.662 - 38844.975: 99.7145% ( 6) 00:11:36.415 38844.975 - 39083.287: 99.7621% ( 6) 00:11:36.415 39083.287 - 39321.600: 99.8176% ( 7) 00:11:36.415 39321.600 - 39559.913: 99.8652% ( 6) 00:11:36.415 39559.913 - 39798.225: 99.9207% ( 7) 00:11:36.415 39798.225 - 40036.538: 99.9683% ( 6) 00:11:36.415 40036.538 - 40274.851: 100.0000% ( 4) 00:11:36.415 00:11:36.415 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:36.415 ============================================================================== 00:11:36.415 Range in us Cumulative IO count 00:11:36.415 7864.320 - 7923.898: 0.0079% ( 1) 00:11:36.415 7923.898 - 7983.476: 0.0317% ( 3) 00:11:36.415 7983.476 - 8043.055: 0.0952% ( 8) 00:11:36.415 8043.055 - 8102.633: 0.1904% ( 12) 00:11:36.415 8102.633 - 8162.211: 0.5235% ( 42) 00:11:36.415 8162.211 - 8221.789: 1.0232% ( 63) 00:11:36.415 8221.789 - 8281.367: 1.6973% ( 85) 00:11:36.415 8281.367 - 8340.945: 2.4350% ( 93) 00:11:36.415 8340.945 - 8400.524: 3.3629% ( 117) 00:11:36.415 8400.524 - 8460.102: 4.3702% ( 127) 00:11:36.415 8460.102 - 8519.680: 5.4331% ( 134) 00:11:36.415 8519.680 - 8579.258: 6.5197% ( 137) 00:11:36.415 8579.258 - 8638.836: 7.5904% ( 135) 00:11:36.415 8638.836 - 8698.415: 8.7881% ( 151) 00:11:36.415 8698.415 - 8757.993: 9.9937% ( 152) 00:11:36.415 8757.993 - 8817.571: 11.1754% ( 149) 00:11:36.415 8817.571 - 8877.149: 12.4365% ( 159) 00:11:36.415 8877.149 - 8936.727: 13.7135% ( 161) 00:11:36.415 8936.727 - 8996.305: 15.2443% ( 193) 00:11:36.415 8996.305 - 9055.884: 16.8782% ( 206) 
00:11:36.415 9055.884 - 9115.462: 18.6628% ( 225) 00:11:36.415 9115.462 - 9175.040: 20.7091% ( 258) 00:11:36.415 9175.040 - 9234.618: 23.1202% ( 304) 00:11:36.415 9234.618 - 9294.196: 25.6107% ( 314) 00:11:36.415 9294.196 - 9353.775: 28.1012% ( 314) 00:11:36.415 9353.775 - 9413.353: 30.8772% ( 350) 00:11:36.415 9413.353 - 9472.931: 33.6215% ( 346) 00:11:36.415 9472.931 - 9532.509: 36.5641% ( 371) 00:11:36.415 9532.509 - 9592.087: 39.6415% ( 388) 00:11:36.415 9592.087 - 9651.665: 42.4175% ( 350) 00:11:36.415 9651.665 - 9711.244: 45.1618% ( 346) 00:11:36.415 9711.244 - 9770.822: 48.0171% ( 360) 00:11:36.415 9770.822 - 9830.400: 50.5393% ( 318) 00:11:36.415 9830.400 - 9889.978: 53.2043% ( 336) 00:11:36.415 9889.978 - 9949.556: 55.6551% ( 309) 00:11:36.415 9949.556 - 10009.135: 58.2884% ( 332) 00:11:36.415 10009.135 - 10068.713: 60.8027% ( 317) 00:11:36.415 10068.713 - 10128.291: 63.2218% ( 305) 00:11:36.415 10128.291 - 10187.869: 65.7598% ( 320) 00:11:36.415 10187.869 - 10247.447: 68.1869% ( 306) 00:11:36.415 10247.447 - 10307.025: 70.5980% ( 304) 00:11:36.415 10307.025 - 10366.604: 73.0647% ( 311) 00:11:36.415 10366.604 - 10426.182: 75.4362% ( 299) 00:11:36.415 10426.182 - 10485.760: 77.5222% ( 263) 00:11:36.415 10485.760 - 10545.338: 79.7113% ( 276) 00:11:36.415 10545.338 - 10604.916: 81.5514% ( 232) 00:11:36.415 10604.916 - 10664.495: 83.1377% ( 200) 00:11:36.415 10664.495 - 10724.073: 84.4305% ( 163) 00:11:36.415 10724.073 - 10783.651: 85.7313% ( 164) 00:11:36.415 10783.651 - 10843.229: 86.8179% ( 137) 00:11:36.415 10843.229 - 10902.807: 87.9362% ( 141) 00:11:36.415 10902.807 - 10962.385: 88.7770% ( 106) 00:11:36.415 10962.385 - 11021.964: 89.4908% ( 90) 00:11:36.415 11021.964 - 11081.542: 90.1332% ( 81) 00:11:36.415 11081.542 - 11141.120: 90.6567% ( 66) 00:11:36.415 11141.120 - 11200.698: 91.2119% ( 70) 00:11:36.415 11200.698 - 11260.276: 91.6640% ( 57) 00:11:36.415 11260.276 - 11319.855: 92.0685% ( 51) 00:11:36.415 11319.855 - 11379.433: 92.5048% ( 55) 00:11:36.415 11379.433 - 11439.011: 92.9331% ( 54) 00:11:36.415 11439.011 - 11498.589: 93.2979% ( 46) 00:11:36.415 11498.589 - 11558.167: 93.7421% ( 56) 00:11:36.415 11558.167 - 11617.745: 94.1148% ( 47) 00:11:36.415 11617.745 - 11677.324: 94.4083% ( 37) 00:11:36.415 11677.324 - 11736.902: 94.7573% ( 44) 00:11:36.415 11736.902 - 11796.480: 95.1063% ( 44) 00:11:36.415 11796.480 - 11856.058: 95.4791% ( 47) 00:11:36.415 11856.058 - 11915.636: 95.7963% ( 40) 00:11:36.415 11915.636 - 11975.215: 96.0819% ( 36) 00:11:36.415 11975.215 - 12034.793: 96.3674% ( 36) 00:11:36.415 12034.793 - 12094.371: 96.6133% ( 31) 00:11:36.415 12094.371 - 12153.949: 96.8115% ( 25) 00:11:36.415 12153.949 - 12213.527: 96.9860% ( 22) 00:11:36.415 12213.527 - 12273.105: 97.1447% ( 20) 00:11:36.415 12273.105 - 12332.684: 97.3192% ( 22) 00:11:36.415 12332.684 - 12392.262: 97.4540% ( 17) 00:11:36.415 12392.262 - 12451.840: 97.5968% ( 18) 00:11:36.415 12451.840 - 12511.418: 97.7237% ( 16) 00:11:36.415 12511.418 - 12570.996: 97.8030% ( 10) 00:11:36.415 12570.996 - 12630.575: 97.8902% ( 11) 00:11:36.415 12630.575 - 12690.153: 97.9537% ( 8) 00:11:36.415 12690.153 - 12749.731: 98.0092% ( 7) 00:11:36.415 12749.731 - 12809.309: 98.0489% ( 5) 00:11:36.415 12809.309 - 12868.887: 98.0806% ( 4) 00:11:36.415 12868.887 - 12928.465: 98.1361% ( 7) 00:11:36.415 12928.465 - 12988.044: 98.1758% ( 5) 00:11:36.415 12988.044 - 13047.622: 98.1996% ( 3) 00:11:36.415 13047.622 - 13107.200: 98.2154% ( 2) 00:11:36.415 13107.200 - 13166.778: 98.2313% ( 2) 00:11:36.415 13166.778 - 13226.356: 
98.2789% ( 6) 00:11:36.415 13226.356 - 13285.935: 98.2868% ( 1) 00:11:36.415 13285.935 - 13345.513: 98.3106% ( 3) 00:11:36.415 13345.513 - 13405.091: 98.3185% ( 1) 00:11:36.415 13405.091 - 13464.669: 98.3265% ( 1) 00:11:36.415 13464.669 - 13524.247: 98.3423% ( 2) 00:11:36.415 13524.247 - 13583.825: 98.3582% ( 2) 00:11:36.415 13583.825 - 13643.404: 98.3661% ( 1) 00:11:36.415 13643.404 - 13702.982: 98.3899% ( 3) 00:11:36.415 13702.982 - 13762.560: 98.3978% ( 1) 00:11:36.415 13762.560 - 13822.138: 98.4137% ( 2) 00:11:36.415 13822.138 - 13881.716: 98.4216% ( 1) 00:11:36.415 13881.716 - 13941.295: 98.4296% ( 1) 00:11:36.415 13941.295 - 14000.873: 98.4534% ( 3) 00:11:36.415 14000.873 - 14060.451: 98.4613% ( 1) 00:11:36.415 14060.451 - 14120.029: 98.4772% ( 2) 00:11:36.415 14179.607 - 14239.185: 98.4851% ( 1) 00:11:36.415 14239.185 - 14298.764: 98.5089% ( 3) 00:11:36.415 14298.764 - 14358.342: 98.5247% ( 2) 00:11:36.415 14358.342 - 14417.920: 98.5485% ( 3) 00:11:36.415 14417.920 - 14477.498: 98.5644% ( 2) 00:11:36.415 14477.498 - 14537.076: 98.5723% ( 1) 00:11:36.415 14537.076 - 14596.655: 98.5961% ( 3) 00:11:36.415 14596.655 - 14656.233: 98.6120% ( 2) 00:11:36.415 14656.233 - 14715.811: 98.6279% ( 2) 00:11:36.415 14715.811 - 14775.389: 98.6437% ( 2) 00:11:36.415 14775.389 - 14834.967: 98.6675% ( 3) 00:11:36.415 14834.967 - 14894.545: 98.6913% ( 3) 00:11:36.415 14894.545 - 14954.124: 98.7072% ( 2) 00:11:36.415 14954.124 - 15013.702: 98.7230% ( 2) 00:11:36.415 15013.702 - 15073.280: 98.7468% ( 3) 00:11:36.415 15073.280 - 15132.858: 98.7627% ( 2) 00:11:36.415 15132.858 - 15192.436: 98.7944% ( 4) 00:11:36.415 15252.015 - 15371.171: 98.8341% ( 5) 00:11:36.415 15371.171 - 15490.327: 98.8658% ( 4) 00:11:36.415 15490.327 - 15609.484: 98.9134% ( 6) 00:11:36.415 15609.484 - 15728.640: 98.9530% ( 5) 00:11:36.415 15728.640 - 15847.796: 98.9848% ( 4) 00:11:36.415 27525.120 - 27644.276: 99.0006% ( 2) 00:11:36.415 27644.276 - 27763.433: 99.0244% ( 3) 00:11:36.415 27763.433 - 27882.589: 99.0403% ( 2) 00:11:36.415 27882.589 - 28001.745: 99.0641% ( 3) 00:11:36.415 28001.745 - 28120.902: 99.0879% ( 3) 00:11:36.415 28120.902 - 28240.058: 99.1117% ( 3) 00:11:36.415 28240.058 - 28359.215: 99.1275% ( 2) 00:11:36.415 28359.215 - 28478.371: 99.1355% ( 1) 00:11:36.415 28478.371 - 28597.527: 99.1513% ( 2) 00:11:36.415 28597.527 - 28716.684: 99.1831% ( 4) 00:11:36.415 28716.684 - 28835.840: 99.1989% ( 2) 00:11:36.415 28835.840 - 28954.996: 99.2227% ( 3) 00:11:36.415 28954.996 - 29074.153: 99.2386% ( 2) 00:11:36.415 29074.153 - 29193.309: 99.2703% ( 4) 00:11:36.415 29193.309 - 29312.465: 99.2862% ( 2) 00:11:36.415 29312.465 - 29431.622: 99.3179% ( 4) 00:11:36.415 29431.622 - 29550.778: 99.3338% ( 2) 00:11:36.415 29550.778 - 29669.935: 99.3496% ( 2) 00:11:36.415 29669.935 - 29789.091: 99.3813% ( 4) 00:11:36.415 29789.091 - 29908.247: 99.3972% ( 2) 00:11:36.415 29908.247 - 30027.404: 99.4210% ( 3) 00:11:36.415 30027.404 - 30146.560: 99.4448% ( 3) 00:11:36.415 30146.560 - 30265.716: 99.4686% ( 3) 00:11:36.415 30265.716 - 30384.873: 99.4924% ( 3) 00:11:36.415 34555.345 - 34793.658: 99.5003% ( 1) 00:11:36.415 34793.658 - 35031.971: 99.5400% ( 5) 00:11:36.415 35031.971 - 35270.284: 99.5876% ( 6) 00:11:36.415 35270.284 - 35508.596: 99.6272% ( 5) 00:11:36.415 35508.596 - 35746.909: 99.6748% ( 6) 00:11:36.415 35746.909 - 35985.222: 99.7145% ( 5) 00:11:36.415 35985.222 - 36223.535: 99.7621% ( 6) 00:11:36.415 36223.535 - 36461.847: 99.8096% ( 6) 00:11:36.415 36461.847 - 36700.160: 99.8572% ( 6) 00:11:36.415 36700.160 - 36938.473: 
99.8969% ( 5) 00:11:36.415 36938.473 - 37176.785: 99.9286% ( 4) 00:11:36.415 37176.785 - 37415.098: 99.9841% ( 7) 00:11:36.415 37415.098 - 37653.411: 100.0000% ( 2) 00:11:36.415 00:11:36.415 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:36.415 ============================================================================== 00:11:36.415 Range in us Cumulative IO count 00:11:36.415 7983.476 - 8043.055: 0.0079% ( 1) 00:11:36.415 8043.055 - 8102.633: 0.0555% ( 6) 00:11:36.415 8102.633 - 8162.211: 0.1190% ( 8) 00:11:36.415 8162.211 - 8221.789: 0.2459% ( 16) 00:11:36.415 8221.789 - 8281.367: 0.5631% ( 40) 00:11:36.415 8281.367 - 8340.945: 1.1421% ( 73) 00:11:36.415 8340.945 - 8400.524: 1.8798% ( 93) 00:11:36.415 8400.524 - 8460.102: 2.7602% ( 111) 00:11:36.416 8460.102 - 8519.680: 3.8626% ( 139) 00:11:36.416 8519.680 - 8579.258: 5.1396% ( 161) 00:11:36.416 8579.258 - 8638.836: 6.5197% ( 174) 00:11:36.416 8638.836 - 8698.415: 7.8442% ( 167) 00:11:36.416 8698.415 - 8757.993: 9.1609% ( 166) 00:11:36.416 8757.993 - 8817.571: 10.4933% ( 168) 00:11:36.416 8817.571 - 8877.149: 11.9527% ( 184) 00:11:36.416 8877.149 - 8936.727: 13.4835% ( 193) 00:11:36.416 8936.727 - 8996.305: 14.9667% ( 187) 00:11:36.416 8996.305 - 9055.884: 16.5213% ( 196) 00:11:36.416 9055.884 - 9115.462: 18.2345% ( 216) 00:11:36.416 9115.462 - 9175.040: 20.1063% ( 236) 00:11:36.416 9175.040 - 9234.618: 22.2081% ( 265) 00:11:36.416 9234.618 - 9294.196: 24.6272% ( 305) 00:11:36.416 9294.196 - 9353.775: 27.3160% ( 339) 00:11:36.416 9353.775 - 9413.353: 30.1079% ( 352) 00:11:36.416 9413.353 - 9472.931: 32.9235% ( 355) 00:11:36.416 9472.931 - 9532.509: 35.5964% ( 337) 00:11:36.416 9532.509 - 9592.087: 38.4121% ( 355) 00:11:36.416 9592.087 - 9651.665: 41.1723% ( 348) 00:11:36.416 9651.665 - 9711.244: 44.1148% ( 371) 00:11:36.416 9711.244 - 9770.822: 46.9940% ( 363) 00:11:36.416 9770.822 - 9830.400: 49.8890% ( 365) 00:11:36.416 9830.400 - 9889.978: 52.7919% ( 366) 00:11:36.416 9889.978 - 9949.556: 55.7265% ( 370) 00:11:36.416 9949.556 - 10009.135: 58.7008% ( 375) 00:11:36.416 10009.135 - 10068.713: 61.6196% ( 368) 00:11:36.416 10068.713 - 10128.291: 64.4749% ( 360) 00:11:36.416 10128.291 - 10187.869: 67.2827% ( 354) 00:11:36.416 10187.869 - 10247.447: 70.1221% ( 358) 00:11:36.416 10247.447 - 10307.025: 72.7951% ( 337) 00:11:36.416 10307.025 - 10366.604: 75.4124% ( 330) 00:11:36.416 10366.604 - 10426.182: 77.8315% ( 305) 00:11:36.416 10426.182 - 10485.760: 79.9175% ( 263) 00:11:36.416 10485.760 - 10545.338: 81.7338% ( 229) 00:11:36.416 10545.338 - 10604.916: 83.3360% ( 202) 00:11:36.416 10604.916 - 10664.495: 84.7161% ( 174) 00:11:36.416 10664.495 - 10724.073: 85.9692% ( 158) 00:11:36.416 10724.073 - 10783.651: 87.0638% ( 138) 00:11:36.416 10783.651 - 10843.229: 87.9442% ( 111) 00:11:36.416 10843.229 - 10902.807: 88.6818% ( 93) 00:11:36.416 10902.807 - 10962.385: 89.3084% ( 79) 00:11:36.416 10962.385 - 11021.964: 89.8319% ( 66) 00:11:36.416 11021.964 - 11081.542: 90.3395% ( 64) 00:11:36.416 11081.542 - 11141.120: 90.7678% ( 54) 00:11:36.416 11141.120 - 11200.698: 91.1961% ( 54) 00:11:36.416 11200.698 - 11260.276: 91.6720% ( 60) 00:11:36.416 11260.276 - 11319.855: 92.0685% ( 50) 00:11:36.416 11319.855 - 11379.433: 92.4968% ( 54) 00:11:36.416 11379.433 - 11439.011: 92.9489% ( 57) 00:11:36.416 11439.011 - 11498.589: 93.3931% ( 56) 00:11:36.416 11498.589 - 11558.167: 93.8135% ( 53) 00:11:36.416 11558.167 - 11617.745: 94.1862% ( 47) 00:11:36.416 11617.745 - 11677.324: 94.5987% ( 52) 00:11:36.416 11677.324 - 11736.902: 94.9635% 
( 46) 00:11:36.416 11736.902 - 11796.480: 95.2649% ( 38) 00:11:36.416 11796.480 - 11856.058: 95.5504% ( 36) 00:11:36.416 11856.058 - 11915.636: 95.8598% ( 39) 00:11:36.416 11915.636 - 11975.215: 96.0819% ( 28) 00:11:36.416 11975.215 - 12034.793: 96.3198% ( 30) 00:11:36.416 12034.793 - 12094.371: 96.5260% ( 26) 00:11:36.416 12094.371 - 12153.949: 96.7402% ( 27) 00:11:36.416 12153.949 - 12213.527: 96.9305% ( 24) 00:11:36.416 12213.527 - 12273.105: 97.1288% ( 25) 00:11:36.416 12273.105 - 12332.684: 97.3271% ( 25) 00:11:36.416 12332.684 - 12392.262: 97.4461% ( 15) 00:11:36.416 12392.262 - 12451.840: 97.5492% ( 13) 00:11:36.416 12451.840 - 12511.418: 97.6047% ( 7) 00:11:36.416 12511.418 - 12570.996: 97.6444% ( 5) 00:11:36.416 12570.996 - 12630.575: 97.6999% ( 7) 00:11:36.416 12630.575 - 12690.153: 97.7316% ( 4) 00:11:36.416 12690.153 - 12749.731: 97.7713% ( 5) 00:11:36.416 12749.731 - 12809.309: 97.8109% ( 5) 00:11:36.416 12809.309 - 12868.887: 97.8426% ( 4) 00:11:36.416 12868.887 - 12928.465: 97.8664% ( 3) 00:11:36.416 12928.465 - 12988.044: 97.9061% ( 5) 00:11:36.416 12988.044 - 13047.622: 97.9457% ( 5) 00:11:36.416 13047.622 - 13107.200: 97.9854% ( 5) 00:11:36.416 13107.200 - 13166.778: 98.0013% ( 2) 00:11:36.416 13166.778 - 13226.356: 98.0409% ( 5) 00:11:36.416 13226.356 - 13285.935: 98.0806% ( 5) 00:11:36.416 13285.935 - 13345.513: 98.0964% ( 2) 00:11:36.416 13345.513 - 13405.091: 98.1123% ( 2) 00:11:36.416 13405.091 - 13464.669: 98.1282% ( 2) 00:11:36.416 13464.669 - 13524.247: 98.1440% ( 2) 00:11:36.416 13524.247 - 13583.825: 98.1599% ( 2) 00:11:36.416 13583.825 - 13643.404: 98.1758% ( 2) 00:11:36.416 13643.404 - 13702.982: 98.1916% ( 2) 00:11:36.416 13702.982 - 13762.560: 98.2075% ( 2) 00:11:36.416 13762.560 - 13822.138: 98.2234% ( 2) 00:11:36.416 13822.138 - 13881.716: 98.2392% ( 2) 00:11:36.416 13881.716 - 13941.295: 98.2551% ( 2) 00:11:36.416 13941.295 - 14000.873: 98.2709% ( 2) 00:11:36.416 14000.873 - 14060.451: 98.2947% ( 3) 00:11:36.416 14060.451 - 14120.029: 98.3027% ( 1) 00:11:36.416 14120.029 - 14179.607: 98.3185% ( 2) 00:11:36.416 14179.607 - 14239.185: 98.3344% ( 2) 00:11:36.416 14239.185 - 14298.764: 98.3423% ( 1) 00:11:36.416 14298.764 - 14358.342: 98.3582% ( 2) 00:11:36.416 14358.342 - 14417.920: 98.3661% ( 1) 00:11:36.416 14417.920 - 14477.498: 98.3820% ( 2) 00:11:36.416 14477.498 - 14537.076: 98.3978% ( 2) 00:11:36.416 14537.076 - 14596.655: 98.4296% ( 4) 00:11:36.416 14596.655 - 14656.233: 98.4692% ( 5) 00:11:36.416 14656.233 - 14715.811: 98.5089% ( 5) 00:11:36.416 14715.811 - 14775.389: 98.5485% ( 5) 00:11:36.416 14775.389 - 14834.967: 98.5803% ( 4) 00:11:36.416 14834.967 - 14894.545: 98.6041% ( 3) 00:11:36.416 14894.545 - 14954.124: 98.6279% ( 3) 00:11:36.416 14954.124 - 15013.702: 98.6516% ( 3) 00:11:36.416 15013.702 - 15073.280: 98.6675% ( 2) 00:11:36.416 15073.280 - 15132.858: 98.6913% ( 3) 00:11:36.416 15132.858 - 15192.436: 98.7151% ( 3) 00:11:36.416 15192.436 - 15252.015: 98.7389% ( 3) 00:11:36.416 15252.015 - 15371.171: 98.7865% ( 6) 00:11:36.416 15371.171 - 15490.327: 98.8341% ( 6) 00:11:36.416 15490.327 - 15609.484: 98.8737% ( 5) 00:11:36.416 15609.484 - 15728.640: 98.9134% ( 5) 00:11:36.416 15728.640 - 15847.796: 98.9610% ( 6) 00:11:36.416 15847.796 - 15966.953: 98.9848% ( 3) 00:11:36.416 24665.367 - 24784.524: 98.9927% ( 1) 00:11:36.416 24784.524 - 24903.680: 99.0165% ( 3) 00:11:36.416 24903.680 - 25022.836: 99.0403% ( 3) 00:11:36.416 25022.836 - 25141.993: 99.0641% ( 3) 00:11:36.416 25141.993 - 25261.149: 99.0879% ( 3) 00:11:36.416 25261.149 - 25380.305: 
99.1117% ( 3) 00:11:36.416 25380.305 - 25499.462: 99.1355% ( 3) 00:11:36.416 25499.462 - 25618.618: 99.1593% ( 3) 00:11:36.416 25618.618 - 25737.775: 99.1831% ( 3) 00:11:36.416 25737.775 - 25856.931: 99.2069% ( 3) 00:11:36.416 25856.931 - 25976.087: 99.2306% ( 3) 00:11:36.416 25976.087 - 26095.244: 99.2544% ( 3) 00:11:36.416 26095.244 - 26214.400: 99.2782% ( 3) 00:11:36.416 26214.400 - 26333.556: 99.3020% ( 3) 00:11:36.416 26333.556 - 26452.713: 99.3258% ( 3) 00:11:36.416 26452.713 - 26571.869: 99.3417% ( 2) 00:11:36.416 26571.869 - 26691.025: 99.3734% ( 4) 00:11:36.416 26691.025 - 26810.182: 99.3893% ( 2) 00:11:36.416 26810.182 - 26929.338: 99.4131% ( 3) 00:11:36.416 26929.338 - 27048.495: 99.4369% ( 3) 00:11:36.416 27048.495 - 27167.651: 99.4607% ( 3) 00:11:36.416 27167.651 - 27286.807: 99.4845% ( 3) 00:11:36.416 27286.807 - 27405.964: 99.4924% ( 1) 00:11:36.416 31695.593 - 31933.905: 99.5241% ( 4) 00:11:36.416 31933.905 - 32172.218: 99.5717% ( 6) 00:11:36.416 32172.218 - 32410.531: 99.6193% ( 6) 00:11:36.416 32410.531 - 32648.844: 99.6748% ( 7) 00:11:36.416 32648.844 - 32887.156: 99.7224% ( 6) 00:11:36.416 32887.156 - 33125.469: 99.7700% ( 6) 00:11:36.416 33125.469 - 33363.782: 99.8255% ( 7) 00:11:36.416 33363.782 - 33602.095: 99.8652% ( 5) 00:11:36.416 33602.095 - 33840.407: 99.9128% ( 6) 00:11:36.416 33840.407 - 34078.720: 99.9603% ( 6) 00:11:36.416 34078.720 - 34317.033: 100.0000% ( 5) 00:11:36.416 00:11:36.416 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:36.416 ============================================================================== 00:11:36.416 Range in us Cumulative IO count 00:11:36.416 7983.476 - 8043.055: 0.0159% ( 2) 00:11:36.416 8043.055 - 8102.633: 0.0476% ( 4) 00:11:36.416 8102.633 - 8162.211: 0.1428% ( 12) 00:11:36.416 8162.211 - 8221.789: 0.3014% ( 20) 00:11:36.416 8221.789 - 8281.367: 0.6742% ( 47) 00:11:36.416 8281.367 - 8340.945: 1.1739% ( 63) 00:11:36.416 8340.945 - 8400.524: 1.8004% ( 79) 00:11:36.416 8400.524 - 8460.102: 2.7046% ( 114) 00:11:36.416 8460.102 - 8519.680: 3.7595% ( 133) 00:11:36.416 8519.680 - 8579.258: 5.0048% ( 157) 00:11:36.416 8579.258 - 8638.836: 6.2659% ( 159) 00:11:36.416 8638.836 - 8698.415: 7.6221% ( 171) 00:11:36.416 8698.415 - 8757.993: 8.9626% ( 169) 00:11:36.416 8757.993 - 8817.571: 10.3347% ( 173) 00:11:36.416 8817.571 - 8877.149: 11.7703% ( 181) 00:11:36.416 8877.149 - 8936.727: 13.2297% ( 184) 00:11:36.416 8936.727 - 8996.305: 14.7367% ( 190) 00:11:36.416 8996.305 - 9055.884: 16.3468% ( 203) 00:11:36.416 9055.884 - 9115.462: 18.0679% ( 217) 00:11:36.416 9115.462 - 9175.040: 20.0666% ( 252) 00:11:36.416 9175.040 - 9234.618: 22.2398% ( 274) 00:11:36.416 9234.618 - 9294.196: 24.5955% ( 297) 00:11:36.416 9294.196 - 9353.775: 27.2446% ( 334) 00:11:36.416 9353.775 - 9413.353: 30.0365% ( 352) 00:11:36.416 9413.353 - 9472.931: 32.9156% ( 363) 00:11:36.416 9472.931 - 9532.509: 35.8027% ( 364) 00:11:36.417 9532.509 - 9592.087: 38.7452% ( 371) 00:11:36.417 9592.087 - 9651.665: 41.5768% ( 357) 00:11:36.417 9651.665 - 9711.244: 44.4162% ( 358) 00:11:36.417 9711.244 - 9770.822: 47.2319% ( 355) 00:11:36.417 9770.822 - 9830.400: 50.1428% ( 367) 00:11:36.417 9830.400 - 9889.978: 53.0615% ( 368) 00:11:36.417 9889.978 - 9949.556: 55.9248% ( 361) 00:11:36.417 9949.556 - 10009.135: 58.8753% ( 372) 00:11:36.417 10009.135 - 10068.713: 61.7465% ( 362) 00:11:36.417 10068.713 - 10128.291: 64.5225% ( 350) 00:11:36.417 10128.291 - 10187.869: 67.2906% ( 349) 00:11:36.417 10187.869 - 10247.447: 70.0349% ( 346) 00:11:36.417 10247.447 - 
10307.025: 72.6047% ( 324) 00:11:36.417 10307.025 - 10366.604: 75.0952% ( 314) 00:11:36.417 10366.604 - 10426.182: 77.3160% ( 280) 00:11:36.417 10426.182 - 10485.760: 79.3464% ( 256) 00:11:36.417 10485.760 - 10545.338: 81.1310% ( 225) 00:11:36.417 10545.338 - 10604.916: 82.6697% ( 194) 00:11:36.417 10604.916 - 10664.495: 84.1291% ( 184) 00:11:36.417 10664.495 - 10724.073: 85.4299% ( 164) 00:11:36.417 10724.073 - 10783.651: 86.5482% ( 141) 00:11:36.417 10783.651 - 10843.229: 87.5635% ( 128) 00:11:36.417 10843.229 - 10902.807: 88.4914% ( 117) 00:11:36.417 10902.807 - 10962.385: 89.2291% ( 93) 00:11:36.417 10962.385 - 11021.964: 89.8556% ( 79) 00:11:36.417 11021.964 - 11081.542: 90.3950% ( 68) 00:11:36.417 11081.542 - 11141.120: 90.9898% ( 75) 00:11:36.417 11141.120 - 11200.698: 91.4499% ( 58) 00:11:36.417 11200.698 - 11260.276: 91.9496% ( 63) 00:11:36.417 11260.276 - 11319.855: 92.3541% ( 51) 00:11:36.417 11319.855 - 11379.433: 92.7665% ( 52) 00:11:36.417 11379.433 - 11439.011: 93.2265% ( 58) 00:11:36.417 11439.011 - 11498.589: 93.6786% ( 57) 00:11:36.417 11498.589 - 11558.167: 94.1069% ( 54) 00:11:36.417 11558.167 - 11617.745: 94.5194% ( 52) 00:11:36.417 11617.745 - 11677.324: 94.9001% ( 48) 00:11:36.417 11677.324 - 11736.902: 95.2094% ( 39) 00:11:36.417 11736.902 - 11796.480: 95.5187% ( 39) 00:11:36.417 11796.480 - 11856.058: 95.8439% ( 41) 00:11:36.417 11856.058 - 11915.636: 96.1215% ( 35) 00:11:36.417 11915.636 - 11975.215: 96.3357% ( 27) 00:11:36.417 11975.215 - 12034.793: 96.5815% ( 31) 00:11:36.417 12034.793 - 12094.371: 96.7402% ( 20) 00:11:36.417 12094.371 - 12153.949: 96.9147% ( 22) 00:11:36.417 12153.949 - 12213.527: 97.0891% ( 22) 00:11:36.417 12213.527 - 12273.105: 97.2557% ( 21) 00:11:36.417 12273.105 - 12332.684: 97.4064% ( 19) 00:11:36.417 12332.684 - 12392.262: 97.5016% ( 12) 00:11:36.417 12392.262 - 12451.840: 97.5730% ( 9) 00:11:36.417 12451.840 - 12511.418: 97.6444% ( 9) 00:11:36.417 12511.418 - 12570.996: 97.6919% ( 6) 00:11:36.417 12570.996 - 12630.575: 97.7316% ( 5) 00:11:36.417 12630.575 - 12690.153: 97.7713% ( 5) 00:11:36.417 12690.153 - 12749.731: 97.8030% ( 4) 00:11:36.417 12749.731 - 12809.309: 97.8188% ( 2) 00:11:36.417 12809.309 - 12868.887: 97.8347% ( 2) 00:11:36.417 12868.887 - 12928.465: 97.8585% ( 3) 00:11:36.417 12928.465 - 12988.044: 97.8744% ( 2) 00:11:36.417 12988.044 - 13047.622: 97.8902% ( 2) 00:11:36.417 13047.622 - 13107.200: 97.9061% ( 2) 00:11:36.417 13107.200 - 13166.778: 97.9220% ( 2) 00:11:36.417 13166.778 - 13226.356: 97.9537% ( 4) 00:11:36.417 13226.356 - 13285.935: 97.9854% ( 4) 00:11:36.417 13285.935 - 13345.513: 98.0092% ( 3) 00:11:36.417 13345.513 - 13405.091: 98.0251% ( 2) 00:11:36.417 13405.091 - 13464.669: 98.0409% ( 2) 00:11:36.417 13464.669 - 13524.247: 98.0647% ( 3) 00:11:36.417 13524.247 - 13583.825: 98.0806% ( 2) 00:11:36.417 13583.825 - 13643.404: 98.1044% ( 3) 00:11:36.417 13643.404 - 13702.982: 98.1202% ( 2) 00:11:36.417 13702.982 - 13762.560: 98.1440% ( 3) 00:11:36.417 13762.560 - 13822.138: 98.1599% ( 2) 00:11:36.417 13822.138 - 13881.716: 98.1837% ( 3) 00:11:36.417 13881.716 - 13941.295: 98.2154% ( 4) 00:11:36.417 13941.295 - 14000.873: 98.2392% ( 3) 00:11:36.417 14000.873 - 14060.451: 98.2709% ( 4) 00:11:36.417 14060.451 - 14120.029: 98.3185% ( 6) 00:11:36.417 14120.029 - 14179.607: 98.3661% ( 6) 00:11:36.417 14179.607 - 14239.185: 98.4058% ( 5) 00:11:36.417 14239.185 - 14298.764: 98.4454% ( 5) 00:11:36.417 14298.764 - 14358.342: 98.4930% ( 6) 00:11:36.417 14358.342 - 14417.920: 98.5406% ( 6) 00:11:36.417 14417.920 - 
14477.498: 98.5803% ( 5) 00:11:36.417 14477.498 - 14537.076: 98.6199% ( 5) 00:11:36.417 14537.076 - 14596.655: 98.6675% ( 6) 00:11:36.417 14596.655 - 14656.233: 98.7151% ( 6) 00:11:36.417 14656.233 - 14715.811: 98.7627% ( 6) 00:11:36.417 14715.811 - 14775.389: 98.8023% ( 5) 00:11:36.417 14775.389 - 14834.967: 98.8341% ( 4) 00:11:36.417 14834.967 - 14894.545: 98.8579% ( 3) 00:11:36.417 14894.545 - 14954.124: 98.8817% ( 3) 00:11:36.417 14954.124 - 15013.702: 98.8975% ( 2) 00:11:36.417 15013.702 - 15073.280: 98.9134% ( 2) 00:11:36.417 15073.280 - 15132.858: 98.9372% ( 3) 00:11:36.417 15132.858 - 15192.436: 98.9610% ( 3) 00:11:36.417 15192.436 - 15252.015: 98.9848% ( 3) 00:11:36.417 22163.084 - 22282.240: 99.0086% ( 3) 00:11:36.417 22282.240 - 22401.396: 99.0403% ( 4) 00:11:36.417 22401.396 - 22520.553: 99.0641% ( 3) 00:11:36.417 22520.553 - 22639.709: 99.0799% ( 2) 00:11:36.417 22639.709 - 22758.865: 99.1037% ( 3) 00:11:36.417 22758.865 - 22878.022: 99.1275% ( 3) 00:11:36.417 22878.022 - 22997.178: 99.1513% ( 3) 00:11:36.417 22997.178 - 23116.335: 99.1672% ( 2) 00:11:36.417 23116.335 - 23235.491: 99.1910% ( 3) 00:11:36.417 23235.491 - 23354.647: 99.2148% ( 3) 00:11:36.417 23354.647 - 23473.804: 99.2386% ( 3) 00:11:36.417 23473.804 - 23592.960: 99.2703% ( 4) 00:11:36.417 23592.960 - 23712.116: 99.2862% ( 2) 00:11:36.417 23712.116 - 23831.273: 99.3100% ( 3) 00:11:36.417 23831.273 - 23950.429: 99.3338% ( 3) 00:11:36.417 23950.429 - 24069.585: 99.3576% ( 3) 00:11:36.417 24069.585 - 24188.742: 99.3813% ( 3) 00:11:36.417 24188.742 - 24307.898: 99.4051% ( 3) 00:11:36.417 24307.898 - 24427.055: 99.4369% ( 4) 00:11:36.417 24427.055 - 24546.211: 99.4607% ( 3) 00:11:36.417 24546.211 - 24665.367: 99.4845% ( 3) 00:11:36.417 24665.367 - 24784.524: 99.4924% ( 1) 00:11:36.417 29312.465 - 29431.622: 99.5082% ( 2) 00:11:36.417 29431.622 - 29550.778: 99.5241% ( 2) 00:11:36.417 29550.778 - 29669.935: 99.5400% ( 2) 00:11:36.417 29669.935 - 29789.091: 99.5717% ( 4) 00:11:36.417 29789.091 - 29908.247: 99.5955% ( 3) 00:11:36.417 29908.247 - 30027.404: 99.6193% ( 3) 00:11:36.417 30027.404 - 30146.560: 99.6352% ( 2) 00:11:36.417 30146.560 - 30265.716: 99.6589% ( 3) 00:11:36.417 30265.716 - 30384.873: 99.6827% ( 3) 00:11:36.417 30384.873 - 30504.029: 99.6986% ( 2) 00:11:36.417 30504.029 - 30742.342: 99.7462% ( 6) 00:11:36.417 30742.342 - 30980.655: 99.7938% ( 6) 00:11:36.417 30980.655 - 31218.967: 99.8414% ( 6) 00:11:36.417 31218.967 - 31457.280: 99.8969% ( 7) 00:11:36.417 31457.280 - 31695.593: 99.9445% ( 6) 00:11:36.417 31695.593 - 31933.905: 99.9921% ( 6) 00:11:36.417 31933.905 - 32172.218: 100.0000% ( 1) 00:11:36.417 00:11:36.417 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:36.417 ============================================================================== 00:11:36.417 Range in us Cumulative IO count 00:11:36.417 8043.055 - 8102.633: 0.0317% ( 4) 00:11:36.417 8102.633 - 8162.211: 0.1031% ( 9) 00:11:36.417 8162.211 - 8221.789: 0.3014% ( 25) 00:11:36.417 8221.789 - 8281.367: 0.6424% ( 43) 00:11:36.418 8281.367 - 8340.945: 1.0866% ( 56) 00:11:36.418 8340.945 - 8400.524: 1.6973% ( 77) 00:11:36.418 8400.524 - 8460.102: 2.6570% ( 121) 00:11:36.418 8460.102 - 8519.680: 3.6723% ( 128) 00:11:36.418 8519.680 - 8579.258: 4.8858% ( 153) 00:11:36.418 8579.258 - 8638.836: 6.1865% ( 164) 00:11:36.418 8638.836 - 8698.415: 7.4477% ( 159) 00:11:36.418 8698.415 - 8757.993: 8.8991% ( 183) 00:11:36.418 8757.993 - 8817.571: 10.2951% ( 176) 00:11:36.418 8817.571 - 8877.149: 11.7624% ( 185) 00:11:36.418 8877.149 - 
8936.727: 13.2931% ( 193) 00:11:36.418 8936.727 - 8996.305: 14.8477% ( 196) 00:11:36.418 8996.305 - 9055.884: 16.4578% ( 203) 00:11:36.418 9055.884 - 9115.462: 18.2186% ( 222) 00:11:36.418 9115.462 - 9175.040: 20.2966% ( 262) 00:11:36.418 9175.040 - 9234.618: 22.4857% ( 276) 00:11:36.418 9234.618 - 9294.196: 24.9524% ( 311) 00:11:36.418 9294.196 - 9353.775: 27.6174% ( 336) 00:11:36.418 9353.775 - 9413.353: 30.2982% ( 338) 00:11:36.418 9413.353 - 9472.931: 33.0029% ( 341) 00:11:36.418 9472.931 - 9532.509: 35.8661% ( 361) 00:11:36.418 9532.509 - 9592.087: 38.6818% ( 355) 00:11:36.418 9592.087 - 9651.665: 41.4578% ( 350) 00:11:36.418 9651.665 - 9711.244: 44.2259% ( 349) 00:11:36.418 9711.244 - 9770.822: 47.0019% ( 350) 00:11:36.418 9770.822 - 9830.400: 49.8255% ( 356) 00:11:36.418 9830.400 - 9889.978: 52.5936% ( 349) 00:11:36.418 9889.978 - 9949.556: 55.4251% ( 357) 00:11:36.418 9949.556 - 10009.135: 58.1694% ( 346) 00:11:36.418 10009.135 - 10068.713: 61.0168% ( 359) 00:11:36.418 10068.713 - 10128.291: 63.7770% ( 348) 00:11:36.418 10128.291 - 10187.869: 66.4895% ( 342) 00:11:36.418 10187.869 - 10247.447: 69.1069% ( 330) 00:11:36.418 10247.447 - 10307.025: 71.6926% ( 326) 00:11:36.418 10307.025 - 10366.604: 74.1275% ( 307) 00:11:36.418 10366.604 - 10426.182: 76.4277% ( 290) 00:11:36.418 10426.182 - 10485.760: 78.5930% ( 273) 00:11:36.418 10485.760 - 10545.338: 80.4727% ( 237) 00:11:36.418 10545.338 - 10604.916: 82.1383% ( 210) 00:11:36.418 10604.916 - 10664.495: 83.6532% ( 191) 00:11:36.418 10664.495 - 10724.073: 85.0254% ( 173) 00:11:36.418 10724.073 - 10783.651: 86.2944% ( 160) 00:11:36.418 10783.651 - 10843.229: 87.3255% ( 130) 00:11:36.418 10843.229 - 10902.807: 88.2297% ( 114) 00:11:36.418 10902.807 - 10962.385: 89.0466% ( 103) 00:11:36.418 10962.385 - 11021.964: 89.7605% ( 90) 00:11:36.418 11021.964 - 11081.542: 90.4029% ( 81) 00:11:36.418 11081.542 - 11141.120: 90.8788% ( 60) 00:11:36.418 11141.120 - 11200.698: 91.3547% ( 60) 00:11:36.418 11200.698 - 11260.276: 91.8544% ( 63) 00:11:36.418 11260.276 - 11319.855: 92.3858% ( 67) 00:11:36.418 11319.855 - 11379.433: 92.9251% ( 68) 00:11:36.418 11379.433 - 11439.011: 93.4407% ( 65) 00:11:36.418 11439.011 - 11498.589: 93.9800% ( 68) 00:11:36.418 11498.589 - 11558.167: 94.4400% ( 58) 00:11:36.418 11558.167 - 11617.745: 94.8763% ( 55) 00:11:36.418 11617.745 - 11677.324: 95.2808% ( 51) 00:11:36.418 11677.324 - 11736.902: 95.6773% ( 50) 00:11:36.418 11736.902 - 11796.480: 96.0343% ( 45) 00:11:36.418 11796.480 - 11856.058: 96.3515% ( 40) 00:11:36.418 11856.058 - 11915.636: 96.6133% ( 33) 00:11:36.418 11915.636 - 11975.215: 96.7798% ( 21) 00:11:36.418 11975.215 - 12034.793: 96.9067% ( 16) 00:11:36.418 12034.793 - 12094.371: 96.9940% ( 11) 00:11:36.418 12094.371 - 12153.949: 97.1050% ( 14) 00:11:36.418 12153.949 - 12213.527: 97.2002% ( 12) 00:11:36.418 12213.527 - 12273.105: 97.3033% ( 13) 00:11:36.418 12273.105 - 12332.684: 97.4064% ( 13) 00:11:36.418 12332.684 - 12392.262: 97.4778% ( 9) 00:11:36.418 12392.262 - 12451.840: 97.5571% ( 10) 00:11:36.418 12451.840 - 12511.418: 97.5968% ( 5) 00:11:36.418 12511.418 - 12570.996: 97.6126% ( 2) 00:11:36.418 12570.996 - 12630.575: 97.6364% ( 3) 00:11:36.418 12630.575 - 12690.153: 97.6523% ( 2) 00:11:36.418 12690.153 - 12749.731: 97.6681% ( 2) 00:11:36.418 12749.731 - 12809.309: 97.6840% ( 2) 00:11:36.418 12809.309 - 12868.887: 97.7078% ( 3) 00:11:36.418 12868.887 - 12928.465: 97.7316% ( 3) 00:11:36.418 12928.465 - 12988.044: 97.7713% ( 5) 00:11:36.418 12988.044 - 13047.622: 97.8109% ( 5) 00:11:36.418 
13047.622 - 13107.200: 97.8426% ( 4) 00:11:36.418 13107.200 - 13166.778: 97.8823% ( 5) 00:11:36.418 13166.778 - 13226.356: 97.9140% ( 4) 00:11:36.418 13226.356 - 13285.935: 97.9616% ( 6) 00:11:36.418 13285.935 - 13345.513: 97.9933% ( 4) 00:11:36.418 13345.513 - 13405.091: 98.0251% ( 4) 00:11:36.418 13405.091 - 13464.669: 98.0568% ( 4) 00:11:36.418 13464.669 - 13524.247: 98.0885% ( 4) 00:11:36.418 13524.247 - 13583.825: 98.1282% ( 5) 00:11:36.418 13583.825 - 13643.404: 98.1599% ( 4) 00:11:36.418 13643.404 - 13702.982: 98.2154% ( 7) 00:11:36.418 13702.982 - 13762.560: 98.2392% ( 3) 00:11:36.418 13762.560 - 13822.138: 98.2789% ( 5) 00:11:36.418 13822.138 - 13881.716: 98.3344% ( 7) 00:11:36.418 13881.716 - 13941.295: 98.3740% ( 5) 00:11:36.418 13941.295 - 14000.873: 98.4058% ( 4) 00:11:36.418 14000.873 - 14060.451: 98.4454% ( 5) 00:11:36.418 14060.451 - 14120.029: 98.4772% ( 4) 00:11:36.418 14120.029 - 14179.607: 98.5247% ( 6) 00:11:36.418 14179.607 - 14239.185: 98.5644% ( 5) 00:11:36.418 14239.185 - 14298.764: 98.6041% ( 5) 00:11:36.418 14298.764 - 14358.342: 98.6437% ( 5) 00:11:36.418 14358.342 - 14417.920: 98.6754% ( 4) 00:11:36.418 14417.920 - 14477.498: 98.7151% ( 5) 00:11:36.418 14477.498 - 14537.076: 98.7548% ( 5) 00:11:36.418 14537.076 - 14596.655: 98.7944% ( 5) 00:11:36.418 14596.655 - 14656.233: 98.8261% ( 4) 00:11:36.418 14656.233 - 14715.811: 98.8658% ( 5) 00:11:36.418 14715.811 - 14775.389: 98.8896% ( 3) 00:11:36.418 14775.389 - 14834.967: 98.9134% ( 3) 00:11:36.418 14834.967 - 14894.545: 98.9293% ( 2) 00:11:36.418 14894.545 - 14954.124: 98.9530% ( 3) 00:11:36.418 14954.124 - 15013.702: 98.9689% ( 2) 00:11:36.418 15013.702 - 15073.280: 98.9848% ( 2) 00:11:36.418 19184.175 - 19303.331: 98.9927% ( 1) 00:11:36.418 19303.331 - 19422.487: 99.0165% ( 3) 00:11:36.418 19422.487 - 19541.644: 99.0324% ( 2) 00:11:36.418 19541.644 - 19660.800: 99.0720% ( 5) 00:11:36.418 19660.800 - 19779.956: 99.0958% ( 3) 00:11:36.418 19779.956 - 19899.113: 99.1196% ( 3) 00:11:36.418 19899.113 - 20018.269: 99.1434% ( 3) 00:11:36.418 20018.269 - 20137.425: 99.1672% ( 3) 00:11:36.418 20137.425 - 20256.582: 99.1910% ( 3) 00:11:36.418 20256.582 - 20375.738: 99.2148% ( 3) 00:11:36.418 20375.738 - 20494.895: 99.2386% ( 3) 00:11:36.418 20494.895 - 20614.051: 99.2624% ( 3) 00:11:36.418 20614.051 - 20733.207: 99.2862% ( 3) 00:11:36.418 20733.207 - 20852.364: 99.3100% ( 3) 00:11:36.418 20852.364 - 20971.520: 99.3338% ( 3) 00:11:36.418 20971.520 - 21090.676: 99.3576% ( 3) 00:11:36.418 21090.676 - 21209.833: 99.3893% ( 4) 00:11:36.418 21209.833 - 21328.989: 99.4131% ( 3) 00:11:36.418 21328.989 - 21448.145: 99.4289% ( 2) 00:11:36.418 21448.145 - 21567.302: 99.4527% ( 3) 00:11:36.418 21567.302 - 21686.458: 99.4765% ( 3) 00:11:36.418 21686.458 - 21805.615: 99.4924% ( 2) 00:11:36.418 26452.713 - 26571.869: 99.5162% ( 3) 00:11:36.418 26571.869 - 26691.025: 99.5400% ( 3) 00:11:36.418 26691.025 - 26810.182: 99.5717% ( 4) 00:11:36.418 26810.182 - 26929.338: 99.5955% ( 3) 00:11:36.418 26929.338 - 27048.495: 99.6193% ( 3) 00:11:36.418 27048.495 - 27167.651: 99.6431% ( 3) 00:11:36.418 27167.651 - 27286.807: 99.6669% ( 3) 00:11:36.418 27286.807 - 27405.964: 99.6986% ( 4) 00:11:36.418 27405.964 - 27525.120: 99.7224% ( 3) 00:11:36.418 27525.120 - 27644.276: 99.7462% ( 3) 00:11:36.418 27644.276 - 27763.433: 99.7621% ( 2) 00:11:36.418 27763.433 - 27882.589: 99.7938% ( 4) 00:11:36.418 27882.589 - 28001.745: 99.8176% ( 3) 00:11:36.418 28001.745 - 28120.902: 99.8414% ( 3) 00:11:36.418 28120.902 - 28240.058: 99.8652% ( 3) 00:11:36.418 
28240.058 - 28359.215: 99.8890% ( 3)
00:11:36.418 28359.215 - 28478.371: 99.9128% ( 3)
00:11:36.418 28478.371 - 28597.527: 99.9445% ( 4)
00:11:36.418 28597.527 - 28716.684: 99.9683% ( 3)
00:11:36.418 28716.684 - 28835.840: 99.9921% ( 3)
00:11:36.418 28835.840 - 28954.996: 100.0000% ( 1)
00:11:36.418
00:11:36.418 18:18:48 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:11:37.798 Initializing NVMe Controllers
00:11:37.798 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:37.798 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:37.798 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:37.798 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:37.798 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:11:37.798 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:11:37.798 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:11:37.798 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:11:37.798 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:11:37.798 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:11:37.798 Initialization complete. Launching workers.
00:11:37.798 ========================================================
00:11:37.798 Latency(us)
00:11:37.798 Device Information : IOPS MiB/s Average min max
00:11:37.798 PCIE (0000:00:11.0) NSID 1 from core 0: 11082.37 129.87 11576.16 9446.29 47257.61
00:11:37.798 PCIE (0000:00:13.0) NSID 1 from core 0: 11082.37 129.87 11546.72 9559.62 44246.81
00:11:37.798 PCIE (0000:00:10.0) NSID 1 from core 0: 11082.37 129.87 11513.66 9358.69 41329.15
00:11:37.798 PCIE (0000:00:12.0) NSID 1 from core 0: 11082.37 129.87 11480.69 9252.31 37794.81
00:11:37.798 PCIE (0000:00:12.0) NSID 2 from core 0: 11082.37 129.87 11448.90 9208.76 35088.37
00:11:37.798 PCIE (0000:00:12.0) NSID 3 from core 0: 11082.37 129.87 11417.21 9317.39 32016.64
00:11:37.798 ========================================================
00:11:37.798 Total : 66494.23 779.23 11497.22 9208.76 47257.61
00:11:37.798
00:11:37.798 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:37.798 =================================================================================
00:11:37.798 1.00000% : 9949.556us
00:11:37.798 10.00000% : 10307.025us
00:11:37.798 25.00000% : 10604.916us
00:11:37.798 50.00000% : 11021.964us
00:11:37.798 75.00000% : 11617.745us
00:11:37.798 90.00000% : 12570.996us
00:11:37.799 95.00000% : 13822.138us
00:11:37.799 98.00000% : 15728.640us
00:11:37.799 99.00000% : 34793.658us
00:11:37.799 99.50000% : 44802.793us
00:11:37.799 99.90000% : 46947.607us
00:11:37.799 99.99000% : 47424.233us
00:11:37.799 99.99900% : 47424.233us
00:11:37.799 99.99990% : 47424.233us
00:11:37.799 99.99999% : 47424.233us
00:11:37.799
00:11:37.799 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:11:37.799 =================================================================================
00:11:37.799 1.00000% : 9889.978us
00:11:37.799 10.00000% : 10307.025us
00:11:37.799 25.00000% : 10604.916us
00:11:37.799 50.00000% : 11021.964us
00:11:37.799 75.00000% : 11558.167us
00:11:37.799 90.00000% : 12451.840us
00:11:37.799 95.00000% : 13881.716us
00:11:37.799 98.00000% : 15847.796us
00:11:37.799 99.00000% : 33125.469us
00:11:37.799 99.50000% : 42181.353us
00:11:37.799 99.90000% : 43849.542us
00:11:37.799 99.99000% : 44326.167us
00:11:37.799 99.99900% : 44326.167us
00:11:37.799 99.99990% : 44326.167us
00:11:37.799 99.99999% : 44326.167us
00:11:37.799
00:11:37.799 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:11:37.799 =================================================================================
00:11:37.799 1.00000% : 9770.822us
00:11:37.799 10.00000% : 10187.869us
00:11:37.799 25.00000% : 10545.338us
00:11:37.799 50.00000% : 11081.542us
00:11:37.799 75.00000% : 11617.745us
00:11:37.799 90.00000% : 12511.418us
00:11:37.799 95.00000% : 14120.029us
00:11:37.799 98.00000% : 15847.796us
00:11:37.799 99.00000% : 30265.716us
00:11:37.799 99.50000% : 38844.975us
00:11:37.799 99.90000% : 40989.789us
00:11:37.799 99.99000% : 41466.415us
00:11:37.799 99.99900% : 41466.415us
00:11:37.799 99.99990% : 41466.415us
00:11:37.799 99.99999% : 41466.415us
00:11:37.799
00:11:37.799 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:37.799 =================================================================================
00:11:37.799 1.00000% : 9949.556us
00:11:37.799 10.00000% : 10366.604us
00:11:37.799 25.00000% : 10604.916us
00:11:37.799 50.00000% : 11021.964us
00:11:37.799 75.00000% : 11617.745us
00:11:37.799 90.00000% : 12511.418us
00:11:37.799 95.00000% : 14000.873us
00:11:37.799 98.00000% : 15728.640us
00:11:37.799 99.00000% : 27525.120us
00:11:37.799 99.50000% : 35746.909us
00:11:37.799 99.90000% : 37415.098us
00:11:37.799 99.99000% : 37891.724us
00:11:37.799 99.99900% : 37891.724us
00:11:37.799 99.99990% : 37891.724us
00:11:37.799 99.99999% : 37891.724us
00:11:37.799
00:11:37.799 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:11:37.799 =================================================================================
00:11:37.799 1.00000% : 9949.556us
00:11:37.799 10.00000% : 10307.025us
00:11:37.799 25.00000% : 10604.916us
00:11:37.799 50.00000% : 11021.964us
00:11:37.799 75.00000% : 11617.745us
00:11:37.799 90.00000% : 12511.418us
00:11:37.799 95.00000% : 13762.560us
00:11:37.799 98.00000% : 15609.484us
00:11:37.799 99.00000% : 25618.618us
00:11:37.799 99.50000% : 30980.655us
00:11:37.799 99.90000% : 34793.658us
00:11:37.799 99.99000% : 35270.284us
00:11:37.799 99.99900% : 35270.284us
00:11:37.799 99.99990% : 35270.284us
00:11:37.799 99.99999% : 35270.284us
00:11:37.799
00:11:37.799 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:11:37.799 =================================================================================
00:11:37.799 1.00000% : 9949.556us
00:11:37.799 10.00000% : 10307.025us
00:11:37.799 25.00000% : 10604.916us
00:11:37.799 50.00000% : 11021.964us
00:11:37.799 75.00000% : 11617.745us
00:11:37.799 90.00000% : 12570.996us
00:11:37.799 95.00000% : 13822.138us
00:11:37.799 98.00000% : 15728.640us
00:11:37.799 99.00000% : 22997.178us
00:11:37.799 99.50000% : 28240.058us
00:11:37.799 99.90000% : 31695.593us
00:11:37.799 99.99000% : 32172.218us
00:11:37.799 99.99900% : 32172.218us
00:11:37.799 99.99990% : 32172.218us
00:11:37.799 99.99999% : 32172.218us
00:11:37.799
00:11:37.799 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:37.799 ==============================================================================
00:11:37.799 Range in us Cumulative IO count
00:11:37.799 9413.353 - 9472.931: 0.0180% ( 2)
00:11:37.799 9472.931 - 9532.509: 0.0449% ( 3)
00:11:37.799 9532.509 - 9592.087: 0.0808% ( 4)
00:11:37.799 9592.087 - 9651.665: 0.1616% ( 9)
00:11:37.799 9651.665 - 9711.244: 0.2874% ( 14)
00:11:37.799 9711.244 - 9770.822: 0.4041% ( 13)
00:11:37.799 9770.822 - 9830.400: 0.6106% ( 23)
00:11:37.799 9830.400 -
9889.978: 0.9519% ( 38) 00:11:37.799 9889.978 - 9949.556: 1.4907% ( 60) 00:11:37.799 9949.556 - 10009.135: 2.2001% ( 79) 00:11:37.799 10009.135 - 10068.713: 3.1070% ( 101) 00:11:37.799 10068.713 - 10128.291: 4.3193% ( 135) 00:11:37.799 10128.291 - 10187.869: 5.7381% ( 158) 00:11:37.799 10187.869 - 10247.447: 7.7137% ( 220) 00:11:37.799 10247.447 - 10307.025: 10.1203% ( 268) 00:11:37.799 10307.025 - 10366.604: 12.9041% ( 310) 00:11:37.799 10366.604 - 10426.182: 16.1369% ( 360) 00:11:37.799 10426.182 - 10485.760: 19.3427% ( 357) 00:11:37.799 10485.760 - 10545.338: 22.7281% ( 377) 00:11:37.799 10545.338 - 10604.916: 26.1584% ( 382) 00:11:37.799 10604.916 - 10664.495: 29.5348% ( 376) 00:11:37.799 10664.495 - 10724.073: 33.1178% ( 399) 00:11:37.799 10724.073 - 10783.651: 36.7816% ( 408) 00:11:37.799 10783.651 - 10843.229: 40.5981% ( 425) 00:11:37.799 10843.229 - 10902.807: 44.2259% ( 404) 00:11:37.799 10902.807 - 10962.385: 47.9705% ( 417) 00:11:37.799 10962.385 - 11021.964: 51.4547% ( 388) 00:11:37.799 11021.964 - 11081.542: 54.9928% ( 394) 00:11:37.799 11081.542 - 11141.120: 58.0819% ( 344) 00:11:37.799 11141.120 - 11200.698: 61.0183% ( 327) 00:11:37.799 11200.698 - 11260.276: 63.5417% ( 281) 00:11:37.799 11260.276 - 11319.855: 65.9573% ( 269) 00:11:37.799 11319.855 - 11379.433: 68.1932% ( 249) 00:11:37.800 11379.433 - 11439.011: 70.3754% ( 243) 00:11:37.800 11439.011 - 11498.589: 72.6203% ( 250) 00:11:37.800 11498.589 - 11558.167: 74.7755% ( 240) 00:11:37.800 11558.167 - 11617.745: 76.5984% ( 203) 00:11:37.800 11617.745 - 11677.324: 78.4483% ( 206) 00:11:37.800 11677.324 - 11736.902: 80.0467% ( 178) 00:11:37.800 11736.902 - 11796.480: 81.3937% ( 150) 00:11:37.800 11796.480 - 11856.058: 82.5970% ( 134) 00:11:37.800 11856.058 - 11915.636: 83.6656% ( 119) 00:11:37.800 11915.636 - 11975.215: 84.5995% ( 104) 00:11:37.800 11975.215 - 12034.793: 85.5244% ( 103) 00:11:37.800 12034.793 - 12094.371: 86.2428% ( 80) 00:11:37.800 12094.371 - 12153.949: 86.8894% ( 72) 00:11:37.800 12153.949 - 12213.527: 87.4910% ( 67) 00:11:37.800 12213.527 - 12273.105: 88.1196% ( 70) 00:11:37.800 12273.105 - 12332.684: 88.6943% ( 64) 00:11:37.800 12332.684 - 12392.262: 89.1882% ( 55) 00:11:37.800 12392.262 - 12451.840: 89.5923% ( 45) 00:11:37.800 12451.840 - 12511.418: 89.9784% ( 43) 00:11:37.800 12511.418 - 12570.996: 90.3646% ( 43) 00:11:37.800 12570.996 - 12630.575: 90.6968% ( 37) 00:11:37.800 12630.575 - 12690.153: 90.9573% ( 29) 00:11:37.800 12690.153 - 12749.731: 91.2356% ( 31) 00:11:37.800 12749.731 - 12809.309: 91.5230% ( 32) 00:11:37.800 12809.309 - 12868.887: 91.7834% ( 29) 00:11:37.800 12868.887 - 12928.465: 92.0797% ( 33) 00:11:37.800 12928.465 - 12988.044: 92.3761% ( 33) 00:11:37.800 12988.044 - 13047.622: 92.6634% ( 32) 00:11:37.800 13047.622 - 13107.200: 92.9508% ( 32) 00:11:37.800 13107.200 - 13166.778: 93.1932% ( 27) 00:11:37.800 13166.778 - 13226.356: 93.4626% ( 30) 00:11:37.800 13226.356 - 13285.935: 93.6333% ( 19) 00:11:37.800 13285.935 - 13345.513: 93.7410% ( 12) 00:11:37.800 13345.513 - 13405.091: 93.9296% ( 21) 00:11:37.800 13405.091 - 13464.669: 94.1451% ( 24) 00:11:37.800 13464.669 - 13524.247: 94.3247% ( 20) 00:11:37.800 13524.247 - 13583.825: 94.5133% ( 21) 00:11:37.800 13583.825 - 13643.404: 94.6300% ( 13) 00:11:37.800 13643.404 - 13702.982: 94.7647% ( 15) 00:11:37.800 13702.982 - 13762.560: 94.9713% ( 23) 00:11:37.800 13762.560 - 13822.138: 95.2227% ( 28) 00:11:37.800 13822.138 - 13881.716: 95.3125% ( 10) 00:11:37.800 13881.716 - 13941.295: 95.3664% ( 6) 00:11:37.800 13941.295 - 14000.873: 
95.4382% ( 8) 00:11:37.800 14000.873 - 14060.451: 95.5280% ( 10) 00:11:37.800 14060.451 - 14120.029: 95.5999% ( 8) 00:11:37.800 14120.029 - 14179.607: 95.6717% ( 8) 00:11:37.800 14179.607 - 14239.185: 95.7346% ( 7) 00:11:37.800 14239.185 - 14298.764: 95.8154% ( 9) 00:11:37.800 14298.764 - 14358.342: 95.8782% ( 7) 00:11:37.800 14358.342 - 14417.920: 95.9321% ( 6) 00:11:37.800 14417.920 - 14477.498: 95.9770% ( 5) 00:11:37.800 14596.655 - 14656.233: 96.0129% ( 4) 00:11:37.800 14656.233 - 14715.811: 96.0938% ( 9) 00:11:37.800 14715.811 - 14775.389: 96.1656% ( 8) 00:11:37.800 14775.389 - 14834.967: 96.2554% ( 10) 00:11:37.800 14834.967 - 14894.545: 96.3272% ( 8) 00:11:37.800 14894.545 - 14954.124: 96.4440% ( 13) 00:11:37.800 14954.124 - 15013.702: 96.6056% ( 18) 00:11:37.800 15013.702 - 15073.280: 96.7493% ( 16) 00:11:37.800 15073.280 - 15132.858: 96.8481% ( 11) 00:11:37.800 15132.858 - 15192.436: 96.9468% ( 11) 00:11:37.800 15192.436 - 15252.015: 97.0546% ( 12) 00:11:37.800 15252.015 - 15371.171: 97.3060% ( 28) 00:11:37.800 15371.171 - 15490.327: 97.5844% ( 31) 00:11:37.800 15490.327 - 15609.484: 97.9436% ( 40) 00:11:37.800 15609.484 - 15728.640: 98.2489% ( 34) 00:11:37.800 15728.640 - 15847.796: 98.4285% ( 20) 00:11:37.800 15847.796 - 15966.953: 98.5363% ( 12) 00:11:37.800 15966.953 - 16086.109: 98.6620% ( 14) 00:11:37.800 16086.109 - 16205.265: 98.7787% ( 13) 00:11:37.800 16205.265 - 16324.422: 98.8326% ( 6) 00:11:37.800 16324.422 - 16443.578: 98.8506% ( 2) 00:11:37.800 33840.407 - 34078.720: 98.8685% ( 2) 00:11:37.800 34078.720 - 34317.033: 98.9134% ( 5) 00:11:37.800 34317.033 - 34555.345: 98.9673% ( 6) 00:11:37.800 34555.345 - 34793.658: 99.0122% ( 5) 00:11:37.800 34793.658 - 35031.971: 99.0571% ( 5) 00:11:37.800 35031.971 - 35270.284: 99.1020% ( 5) 00:11:37.800 35270.284 - 35508.596: 99.1469% ( 5) 00:11:37.800 35508.596 - 35746.909: 99.2008% ( 6) 00:11:37.800 35746.909 - 35985.222: 99.2547% ( 6) 00:11:37.800 35985.222 - 36223.535: 99.2996% ( 5) 00:11:37.800 36223.535 - 36461.847: 99.3445% ( 5) 00:11:37.800 36461.847 - 36700.160: 99.3983% ( 6) 00:11:37.800 36700.160 - 36938.473: 99.4253% ( 3) 00:11:37.800 44326.167 - 44564.480: 99.4612% ( 4) 00:11:37.800 44564.480 - 44802.793: 99.5151% ( 6) 00:11:37.800 44802.793 - 45041.105: 99.5600% ( 5) 00:11:37.800 45041.105 - 45279.418: 99.6049% ( 5) 00:11:37.800 45279.418 - 45517.731: 99.6588% ( 6) 00:11:37.800 45517.731 - 45756.044: 99.6947% ( 4) 00:11:37.800 45756.044 - 45994.356: 99.7486% ( 6) 00:11:37.800 45994.356 - 46232.669: 99.7935% ( 5) 00:11:37.800 46232.669 - 46470.982: 99.8473% ( 6) 00:11:37.800 46470.982 - 46709.295: 99.8922% ( 5) 00:11:37.800 46709.295 - 46947.607: 99.9371% ( 5) 00:11:37.800 46947.607 - 47185.920: 99.9820% ( 5) 00:11:37.800 47185.920 - 47424.233: 100.0000% ( 2) 00:11:37.800 00:11:37.800 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:37.800 ============================================================================== 00:11:37.800 Range in us Cumulative IO count 00:11:37.800 9532.509 - 9592.087: 0.0269% ( 3) 00:11:37.800 9592.087 - 9651.665: 0.0539% ( 3) 00:11:37.800 9651.665 - 9711.244: 0.1167% ( 7) 00:11:37.800 9711.244 - 9770.822: 0.2514% ( 15) 00:11:37.800 9770.822 - 9830.400: 0.5388% ( 32) 00:11:37.800 9830.400 - 9889.978: 1.1584% ( 69) 00:11:37.800 9889.978 - 9949.556: 1.6792% ( 58) 00:11:37.800 9949.556 - 10009.135: 2.5413% ( 96) 00:11:37.800 10009.135 - 10068.713: 3.4483% ( 101) 00:11:37.800 10068.713 - 10128.291: 4.7593% ( 146) 00:11:37.800 10128.291 - 10187.869: 6.3039% ( 172) 00:11:37.800 
10187.869 - 10247.447: 8.1807% ( 209) 00:11:37.800 10247.447 - 10307.025: 10.4167% ( 249) 00:11:37.800 10307.025 - 10366.604: 12.9131% ( 278) 00:11:37.800 10366.604 - 10426.182: 15.6160% ( 301) 00:11:37.800 10426.182 - 10485.760: 18.8937% ( 365) 00:11:37.800 10485.760 - 10545.338: 22.1624% ( 364) 00:11:37.801 10545.338 - 10604.916: 25.6466% ( 388) 00:11:37.801 10604.916 - 10664.495: 29.2565% ( 402) 00:11:37.801 10664.495 - 10724.073: 33.3244% ( 453) 00:11:37.801 10724.073 - 10783.651: 37.3384% ( 447) 00:11:37.801 10783.651 - 10843.229: 41.0111% ( 409) 00:11:37.801 10843.229 - 10902.807: 44.9264% ( 436) 00:11:37.801 10902.807 - 10962.385: 48.4555% ( 393) 00:11:37.801 10962.385 - 11021.964: 51.7241% ( 364) 00:11:37.801 11021.964 - 11081.542: 55.0108% ( 366) 00:11:37.801 11081.542 - 11141.120: 58.2166% ( 357) 00:11:37.801 11141.120 - 11200.698: 61.0722% ( 318) 00:11:37.801 11200.698 - 11260.276: 63.7841% ( 302) 00:11:37.801 11260.276 - 11319.855: 66.4062% ( 292) 00:11:37.801 11319.855 - 11379.433: 68.7949% ( 266) 00:11:37.801 11379.433 - 11439.011: 71.1835% ( 266) 00:11:37.801 11439.011 - 11498.589: 73.1412% ( 218) 00:11:37.801 11498.589 - 11558.167: 75.1616% ( 225) 00:11:37.801 11558.167 - 11617.745: 77.0744% ( 213) 00:11:37.801 11617.745 - 11677.324: 78.6189% ( 172) 00:11:37.801 11677.324 - 11736.902: 80.0018% ( 154) 00:11:37.801 11736.902 - 11796.480: 81.2410% ( 138) 00:11:37.801 11796.480 - 11856.058: 82.3725% ( 126) 00:11:37.801 11856.058 - 11915.636: 83.3603% ( 110) 00:11:37.801 11915.636 - 11975.215: 84.3481% ( 110) 00:11:37.801 11975.215 - 12034.793: 85.3089% ( 107) 00:11:37.801 12034.793 - 12094.371: 86.2787% ( 108) 00:11:37.801 12094.371 - 12153.949: 87.1498% ( 97) 00:11:37.801 12153.949 - 12213.527: 87.9041% ( 84) 00:11:37.801 12213.527 - 12273.105: 88.6315% ( 81) 00:11:37.801 12273.105 - 12332.684: 89.2062% ( 64) 00:11:37.801 12332.684 - 12392.262: 89.6013% ( 44) 00:11:37.801 12392.262 - 12451.840: 90.0144% ( 46) 00:11:37.801 12451.840 - 12511.418: 90.3825% ( 41) 00:11:37.801 12511.418 - 12570.996: 90.8046% ( 47) 00:11:37.801 12570.996 - 12630.575: 91.2716% ( 52) 00:11:37.801 12630.575 - 12690.153: 91.6128% ( 38) 00:11:37.801 12690.153 - 12749.731: 91.8912% ( 31) 00:11:37.801 12749.731 - 12809.309: 92.1246% ( 26) 00:11:37.801 12809.309 - 12868.887: 92.4210% ( 33) 00:11:37.801 12868.887 - 12928.465: 92.6634% ( 27) 00:11:37.801 12928.465 - 12988.044: 92.8969% ( 26) 00:11:37.801 12988.044 - 13047.622: 93.0765% ( 20) 00:11:37.801 13047.622 - 13107.200: 93.2561% ( 20) 00:11:37.801 13107.200 - 13166.778: 93.3998% ( 16) 00:11:37.801 13166.778 - 13226.356: 93.5794% ( 20) 00:11:37.801 13226.356 - 13285.935: 93.7231% ( 16) 00:11:37.801 13285.935 - 13345.513: 93.8578% ( 15) 00:11:37.801 13345.513 - 13405.091: 94.0643% ( 23) 00:11:37.801 13405.091 - 13464.669: 94.2349% ( 19) 00:11:37.801 13464.669 - 13524.247: 94.4055% ( 19) 00:11:37.801 13524.247 - 13583.825: 94.5672% ( 18) 00:11:37.801 13583.825 - 13643.404: 94.6839% ( 13) 00:11:37.801 13643.404 - 13702.982: 94.8006% ( 13) 00:11:37.801 13702.982 - 13762.560: 94.8815% ( 9) 00:11:37.801 13762.560 - 13822.138: 94.9982% ( 13) 00:11:37.801 13822.138 - 13881.716: 95.1060% ( 12) 00:11:37.801 13881.716 - 13941.295: 95.2137% ( 12) 00:11:37.801 13941.295 - 14000.873: 95.3664% ( 17) 00:11:37.801 14000.873 - 14060.451: 95.4562% ( 10) 00:11:37.801 14060.451 - 14120.029: 95.5101% ( 6) 00:11:37.801 14120.029 - 14179.607: 95.5639% ( 6) 00:11:37.801 14179.607 - 14239.185: 95.6088% ( 5) 00:11:37.801 14239.185 - 14298.764: 95.6627% ( 6) 00:11:37.801 14298.764 
- 14358.342: 95.7076% ( 5) 00:11:37.801 14358.342 - 14417.920: 95.7435% ( 4) 00:11:37.801 14417.920 - 14477.498: 95.7974% ( 6) 00:11:37.801 14477.498 - 14537.076: 95.8423% ( 5) 00:11:37.801 14537.076 - 14596.655: 95.8962% ( 6) 00:11:37.801 14596.655 - 14656.233: 95.9591% ( 7) 00:11:37.801 14656.233 - 14715.811: 96.0578% ( 11) 00:11:37.801 14715.811 - 14775.389: 96.1925% ( 15) 00:11:37.801 14775.389 - 14834.967: 96.2733% ( 9) 00:11:37.801 14834.967 - 14894.545: 96.3272% ( 6) 00:11:37.801 14894.545 - 14954.124: 96.3721% ( 5) 00:11:37.801 14954.124 - 15013.702: 96.4170% ( 5) 00:11:37.801 15013.702 - 15073.280: 96.4978% ( 9) 00:11:37.801 15073.280 - 15132.858: 96.5966% ( 11) 00:11:37.801 15132.858 - 15192.436: 96.7044% ( 12) 00:11:37.801 15192.436 - 15252.015: 96.7942% ( 10) 00:11:37.801 15252.015 - 15371.171: 97.0456% ( 28) 00:11:37.801 15371.171 - 15490.327: 97.3779% ( 37) 00:11:37.801 15490.327 - 15609.484: 97.7101% ( 37) 00:11:37.801 15609.484 - 15728.640: 97.9885% ( 31) 00:11:37.801 15728.640 - 15847.796: 98.2130% ( 25) 00:11:37.801 15847.796 - 15966.953: 98.3477% ( 15) 00:11:37.801 15966.953 - 16086.109: 98.5004% ( 17) 00:11:37.801 16086.109 - 16205.265: 98.6171% ( 13) 00:11:37.801 16205.265 - 16324.422: 98.7159% ( 11) 00:11:37.801 16324.422 - 16443.578: 98.7877% ( 8) 00:11:37.801 16443.578 - 16562.735: 98.8326% ( 5) 00:11:37.801 16562.735 - 16681.891: 98.8506% ( 2) 00:11:37.801 32172.218 - 32410.531: 98.8775% ( 3) 00:11:37.801 32410.531 - 32648.844: 98.9134% ( 4) 00:11:37.801 32648.844 - 32887.156: 98.9583% ( 5) 00:11:37.801 32887.156 - 33125.469: 99.0032% ( 5) 00:11:37.801 33125.469 - 33363.782: 99.0392% ( 4) 00:11:37.801 33363.782 - 33602.095: 99.0841% ( 5) 00:11:37.801 33602.095 - 33840.407: 99.1290% ( 5) 00:11:37.801 33840.407 - 34078.720: 99.1828% ( 6) 00:11:37.801 34078.720 - 34317.033: 99.2277% ( 5) 00:11:37.801 34317.033 - 34555.345: 99.2726% ( 5) 00:11:37.801 34555.345 - 34793.658: 99.3175% ( 5) 00:11:37.801 34793.658 - 35031.971: 99.3624% ( 5) 00:11:37.801 35031.971 - 35270.284: 99.4163% ( 6) 00:11:37.801 35270.284 - 35508.596: 99.4253% ( 1) 00:11:37.801 41704.727 - 41943.040: 99.4792% ( 6) 00:11:37.801 41943.040 - 42181.353: 99.5420% ( 7) 00:11:37.801 42181.353 - 42419.665: 99.5869% ( 5) 00:11:37.801 42419.665 - 42657.978: 99.6408% ( 6) 00:11:37.801 42657.978 - 42896.291: 99.6947% ( 6) 00:11:37.801 42896.291 - 43134.604: 99.7575% ( 7) 00:11:37.801 43134.604 - 43372.916: 99.8114% ( 6) 00:11:37.801 43372.916 - 43611.229: 99.8653% ( 6) 00:11:37.801 43611.229 - 43849.542: 99.9192% ( 6) 00:11:37.801 43849.542 - 44087.855: 99.9551% ( 4) 00:11:37.801 44087.855 - 44326.167: 100.0000% ( 5) 00:11:37.801 00:11:37.801 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:37.801 ============================================================================== 00:11:37.801 Range in us Cumulative IO count 00:11:37.802 9353.775 - 9413.353: 0.0359% ( 4) 00:11:37.802 9413.353 - 9472.931: 0.0629% ( 3) 00:11:37.802 9472.931 - 9532.509: 0.1078% ( 5) 00:11:37.802 9532.509 - 9592.087: 0.1886% ( 9) 00:11:37.802 9592.087 - 9651.665: 0.5119% ( 36) 00:11:37.802 9651.665 - 9711.244: 0.9519% ( 49) 00:11:37.802 9711.244 - 9770.822: 1.5356% ( 65) 00:11:37.802 9770.822 - 9830.400: 2.4695% ( 104) 00:11:37.802 9830.400 - 9889.978: 3.3854% ( 102) 00:11:37.802 9889.978 - 9949.556: 4.1846% ( 89) 00:11:37.802 9949.556 - 10009.135: 5.5136% ( 148) 00:11:37.802 10009.135 - 10068.713: 7.3455% ( 204) 00:11:37.802 10068.713 - 10128.291: 9.1954% ( 206) 00:11:37.802 10128.291 - 10187.869: 11.1620% ( 219) 
00:11:37.802 10187.869 - 10247.447: 13.2543% ( 233) 00:11:37.802 10247.447 - 10307.025: 15.1401% ( 210) 00:11:37.802 10307.025 - 10366.604: 17.7353% ( 289) 00:11:37.802 10366.604 - 10426.182: 20.3394% ( 290) 00:11:37.802 10426.182 - 10485.760: 22.9975% ( 296) 00:11:37.802 10485.760 - 10545.338: 25.7723% ( 309) 00:11:37.802 10545.338 - 10604.916: 28.5381% ( 308) 00:11:37.802 10604.916 - 10664.495: 31.1602% ( 292) 00:11:37.802 10664.495 - 10724.073: 34.0787% ( 325) 00:11:37.802 10724.073 - 10783.651: 37.3473% ( 364) 00:11:37.802 10783.651 - 10843.229: 40.1670% ( 314) 00:11:37.802 10843.229 - 10902.807: 43.0047% ( 316) 00:11:37.802 10902.807 - 10962.385: 45.9501% ( 328) 00:11:37.802 10962.385 - 11021.964: 49.1739% ( 359) 00:11:37.802 11021.964 - 11081.542: 52.0744% ( 323) 00:11:37.802 11081.542 - 11141.120: 55.1185% ( 339) 00:11:37.802 11141.120 - 11200.698: 58.0819% ( 330) 00:11:37.802 11200.698 - 11260.276: 61.1889% ( 346) 00:11:37.802 11260.276 - 11319.855: 63.8919% ( 301) 00:11:37.802 11319.855 - 11379.433: 66.4601% ( 286) 00:11:37.802 11379.433 - 11439.011: 68.6782% ( 247) 00:11:37.802 11439.011 - 11498.589: 70.8782% ( 245) 00:11:37.802 11498.589 - 11558.167: 73.0873% ( 246) 00:11:37.802 11558.167 - 11617.745: 75.2065% ( 236) 00:11:37.802 11617.745 - 11677.324: 76.8768% ( 186) 00:11:37.802 11677.324 - 11736.902: 78.3675% ( 166) 00:11:37.802 11736.902 - 11796.480: 79.8671% ( 167) 00:11:37.802 11796.480 - 11856.058: 81.0884% ( 136) 00:11:37.802 11856.058 - 11915.636: 82.2917% ( 134) 00:11:37.802 11915.636 - 11975.215: 83.4501% ( 129) 00:11:37.802 11975.215 - 12034.793: 84.5456% ( 122) 00:11:37.802 12034.793 - 12094.371: 85.3987% ( 95) 00:11:37.802 12094.371 - 12153.949: 86.2698% ( 97) 00:11:37.802 12153.949 - 12213.527: 87.1767% ( 101) 00:11:37.802 12213.527 - 12273.105: 87.8951% ( 80) 00:11:37.802 12273.105 - 12332.684: 88.6135% ( 80) 00:11:37.802 12332.684 - 12392.262: 89.1613% ( 61) 00:11:37.802 12392.262 - 12451.840: 89.6821% ( 58) 00:11:37.802 12451.840 - 12511.418: 90.1311% ( 50) 00:11:37.802 12511.418 - 12570.996: 90.5172% ( 43) 00:11:37.802 12570.996 - 12630.575: 90.8315% ( 35) 00:11:37.802 12630.575 - 12690.153: 91.1548% ( 36) 00:11:37.802 12690.153 - 12749.731: 91.4960% ( 38) 00:11:37.802 12749.731 - 12809.309: 91.7744% ( 31) 00:11:37.802 12809.309 - 12868.887: 92.0708% ( 33) 00:11:37.802 12868.887 - 12928.465: 92.2593% ( 21) 00:11:37.802 12928.465 - 12988.044: 92.4749% ( 24) 00:11:37.802 12988.044 - 13047.622: 92.6365% ( 18) 00:11:37.802 13047.622 - 13107.200: 92.8161% ( 20) 00:11:37.802 13107.200 - 13166.778: 92.9777% ( 18) 00:11:37.802 13166.778 - 13226.356: 93.1124% ( 15) 00:11:37.802 13226.356 - 13285.935: 93.2651% ( 17) 00:11:37.802 13285.935 - 13345.513: 93.4447% ( 20) 00:11:37.802 13345.513 - 13405.091: 93.5704% ( 14) 00:11:37.802 13405.091 - 13464.669: 93.7231% ( 17) 00:11:37.802 13464.669 - 13524.247: 93.8667% ( 16) 00:11:37.802 13524.247 - 13583.825: 94.0014% ( 15) 00:11:37.802 13583.825 - 13643.404: 94.1272% ( 14) 00:11:37.802 13643.404 - 13702.982: 94.2259% ( 11) 00:11:37.802 13702.982 - 13762.560: 94.3606% ( 15) 00:11:37.802 13762.560 - 13822.138: 94.4864% ( 14) 00:11:37.802 13822.138 - 13881.716: 94.6031% ( 13) 00:11:37.802 13881.716 - 13941.295: 94.7108% ( 12) 00:11:37.802 13941.295 - 14000.873: 94.8366% ( 14) 00:11:37.802 14000.873 - 14060.451: 94.9353% ( 11) 00:11:37.802 14060.451 - 14120.029: 95.0521% ( 13) 00:11:37.802 14120.029 - 14179.607: 95.1598% ( 12) 00:11:37.802 14179.607 - 14239.185: 95.3035% ( 16) 00:11:37.802 14239.185 - 14298.764: 95.4382% ( 15) 
00:11:37.802 14298.764 - 14358.342: 95.5190% ( 9) 00:11:37.802 14358.342 - 14417.920: 95.6537% ( 15) 00:11:37.802 14417.920 - 14477.498: 95.7346% ( 9) 00:11:37.802 14477.498 - 14537.076: 95.8154% ( 9) 00:11:37.802 14537.076 - 14596.655: 95.8782% ( 7) 00:11:37.802 14596.655 - 14656.233: 95.9411% ( 7) 00:11:37.802 14656.233 - 14715.811: 96.0489% ( 12) 00:11:37.802 14715.811 - 14775.389: 96.1746% ( 14) 00:11:37.802 14775.389 - 14834.967: 96.2913% ( 13) 00:11:37.802 14834.967 - 14894.545: 96.3901% ( 11) 00:11:37.802 14894.545 - 14954.124: 96.4799% ( 10) 00:11:37.802 14954.124 - 15013.702: 96.5517% ( 8) 00:11:37.802 15013.702 - 15073.280: 96.6595% ( 12) 00:11:37.802 15073.280 - 15132.858: 96.7313% ( 8) 00:11:37.802 15132.858 - 15192.436: 96.8481% ( 13) 00:11:37.802 15192.436 - 15252.015: 97.0366% ( 21) 00:11:37.802 15252.015 - 15371.171: 97.2791% ( 27) 00:11:37.802 15371.171 - 15490.327: 97.5036% ( 25) 00:11:37.802 15490.327 - 15609.484: 97.7191% ( 24) 00:11:37.802 15609.484 - 15728.640: 97.8807% ( 18) 00:11:37.802 15728.640 - 15847.796: 98.0514% ( 19) 00:11:37.802 15847.796 - 15966.953: 98.2489% ( 22) 00:11:37.802 15966.953 - 16086.109: 98.4285% ( 20) 00:11:37.802 16086.109 - 16205.265: 98.5453% ( 13) 00:11:37.802 16205.265 - 16324.422: 98.6440% ( 11) 00:11:37.802 16324.422 - 16443.578: 98.7249% ( 9) 00:11:37.802 16443.578 - 16562.735: 98.8057% ( 9) 00:11:37.802 16562.735 - 16681.891: 98.8506% ( 5) 00:11:37.802 29312.465 - 29431.622: 98.8775% ( 3) 00:11:37.802 29431.622 - 29550.778: 98.9045% ( 3) 00:11:37.802 29550.778 - 29669.935: 98.9134% ( 1) 00:11:37.802 29669.935 - 29789.091: 98.9404% ( 3) 00:11:37.802 29789.091 - 29908.247: 98.9583% ( 2) 00:11:37.802 29908.247 - 30027.404: 98.9763% ( 2) 00:11:37.802 30027.404 - 30146.560: 98.9943% ( 2) 00:11:37.802 30146.560 - 30265.716: 99.0122% ( 2) 00:11:37.802 30265.716 - 30384.873: 99.0302% ( 2) 00:11:37.802 30384.873 - 30504.029: 99.0481% ( 2) 00:11:37.803 30504.029 - 30742.342: 99.0930% ( 5) 00:11:37.803 30742.342 - 30980.655: 99.1379% ( 5) 00:11:37.803 30980.655 - 31218.967: 99.1739% ( 4) 00:11:37.803 31218.967 - 31457.280: 99.2188% ( 5) 00:11:37.803 31457.280 - 31695.593: 99.2547% ( 4) 00:11:37.803 31695.593 - 31933.905: 99.2906% ( 4) 00:11:37.803 31933.905 - 32172.218: 99.3445% ( 6) 00:11:37.803 32172.218 - 32410.531: 99.3804% ( 4) 00:11:37.803 32410.531 - 32648.844: 99.4253% ( 5) 00:11:37.803 38368.349 - 38606.662: 99.4612% ( 4) 00:11:37.803 38606.662 - 38844.975: 99.5061% ( 5) 00:11:37.803 38844.975 - 39083.287: 99.5600% ( 6) 00:11:37.803 39083.287 - 39321.600: 99.5869% ( 3) 00:11:37.803 39321.600 - 39559.913: 99.6408% ( 6) 00:11:37.803 39559.913 - 39798.225: 99.6947% ( 6) 00:11:37.803 39798.225 - 40036.538: 99.7396% ( 5) 00:11:37.803 40036.538 - 40274.851: 99.7845% ( 5) 00:11:37.803 40274.851 - 40513.164: 99.8384% ( 6) 00:11:37.803 40513.164 - 40751.476: 99.8833% ( 5) 00:11:37.803 40751.476 - 40989.789: 99.9282% ( 5) 00:11:37.803 40989.789 - 41228.102: 99.9820% ( 6) 00:11:37.803 41228.102 - 41466.415: 100.0000% ( 2) 00:11:37.803 00:11:37.803 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:37.803 ============================================================================== 00:11:37.803 Range in us Cumulative IO count 00:11:37.803 9234.618 - 9294.196: 0.0090% ( 1) 00:11:37.803 9472.931 - 9532.509: 0.0180% ( 1) 00:11:37.803 9532.509 - 9592.087: 0.0359% ( 2) 00:11:37.803 9592.087 - 9651.665: 0.0718% ( 4) 00:11:37.803 9651.665 - 9711.244: 0.1078% ( 4) 00:11:37.803 9711.244 - 9770.822: 0.2335% ( 14) 00:11:37.803 9770.822 - 
9830.400: 0.4580% ( 25) 00:11:37.803 9830.400 - 9889.978: 0.7274% ( 30) 00:11:37.803 9889.978 - 9949.556: 1.1853% ( 51) 00:11:37.803 9949.556 - 10009.135: 1.8319% ( 72) 00:11:37.803 10009.135 - 10068.713: 2.7389% ( 101) 00:11:37.803 10068.713 - 10128.291: 3.9152% ( 131) 00:11:37.803 10128.291 - 10187.869: 5.3071% ( 155) 00:11:37.803 10187.869 - 10247.447: 7.2468% ( 216) 00:11:37.803 10247.447 - 10307.025: 9.4828% ( 249) 00:11:37.803 10307.025 - 10366.604: 12.4461% ( 330) 00:11:37.803 10366.604 - 10426.182: 15.3736% ( 326) 00:11:37.803 10426.182 - 10485.760: 18.4447% ( 342) 00:11:37.803 10485.760 - 10545.338: 21.9109% ( 386) 00:11:37.803 10545.338 - 10604.916: 25.5298% ( 403) 00:11:37.803 10604.916 - 10664.495: 28.9871% ( 385) 00:11:37.803 10664.495 - 10724.073: 32.6419% ( 407) 00:11:37.803 10724.073 - 10783.651: 36.8624% ( 470) 00:11:37.803 10783.651 - 10843.229: 41.0560% ( 467) 00:11:37.803 10843.229 - 10902.807: 45.1060% ( 451) 00:11:37.803 10902.807 - 10962.385: 48.6800% ( 398) 00:11:37.803 10962.385 - 11021.964: 52.4784% ( 423) 00:11:37.803 11021.964 - 11081.542: 55.6304% ( 351) 00:11:37.803 11081.542 - 11141.120: 58.5848% ( 329) 00:11:37.803 11141.120 - 11200.698: 61.2338% ( 295) 00:11:37.803 11200.698 - 11260.276: 63.9817% ( 306) 00:11:37.803 11260.276 - 11319.855: 66.4601% ( 276) 00:11:37.803 11319.855 - 11379.433: 68.9476% ( 277) 00:11:37.803 11379.433 - 11439.011: 71.0219% ( 231) 00:11:37.803 11439.011 - 11498.589: 72.9436% ( 214) 00:11:37.803 11498.589 - 11558.167: 74.9641% ( 225) 00:11:37.803 11558.167 - 11617.745: 76.5356% ( 175) 00:11:37.803 11617.745 - 11677.324: 78.0981% ( 174) 00:11:37.803 11677.324 - 11736.902: 79.6695% ( 175) 00:11:37.803 11736.902 - 11796.480: 81.2500% ( 176) 00:11:37.803 11796.480 - 11856.058: 82.6598% ( 157) 00:11:37.803 11856.058 - 11915.636: 83.8093% ( 128) 00:11:37.803 11915.636 - 11975.215: 84.9318% ( 125) 00:11:37.803 11975.215 - 12034.793: 85.8836% ( 106) 00:11:37.803 12034.793 - 12094.371: 86.7726% ( 99) 00:11:37.803 12094.371 - 12153.949: 87.5898% ( 91) 00:11:37.803 12153.949 - 12213.527: 88.1196% ( 59) 00:11:37.803 12213.527 - 12273.105: 88.6315% ( 57) 00:11:37.803 12273.105 - 12332.684: 89.0894% ( 51) 00:11:37.803 12332.684 - 12392.262: 89.4846% ( 44) 00:11:37.803 12392.262 - 12451.840: 89.8797% ( 44) 00:11:37.803 12451.840 - 12511.418: 90.2478% ( 41) 00:11:37.803 12511.418 - 12570.996: 90.6250% ( 42) 00:11:37.803 12570.996 - 12630.575: 90.9842% ( 40) 00:11:37.803 12630.575 - 12690.153: 91.2985% ( 35) 00:11:37.803 12690.153 - 12749.731: 91.5858% ( 32) 00:11:37.803 12749.731 - 12809.309: 91.8103% ( 25) 00:11:37.803 12809.309 - 12868.887: 92.0438% ( 26) 00:11:37.803 12868.887 - 12928.465: 92.2773% ( 26) 00:11:37.803 12928.465 - 12988.044: 92.4210% ( 16) 00:11:37.803 12988.044 - 13047.622: 92.5826% ( 18) 00:11:37.803 13047.622 - 13107.200: 92.7712% ( 21) 00:11:37.803 13107.200 - 13166.778: 92.9777% ( 23) 00:11:37.803 13166.778 - 13226.356: 93.2561% ( 31) 00:11:37.803 13226.356 - 13285.935: 93.4716% ( 24) 00:11:37.803 13285.935 - 13345.513: 93.7051% ( 26) 00:11:37.803 13345.513 - 13405.091: 93.9476% ( 27) 00:11:37.803 13405.091 - 13464.669: 94.1092% ( 18) 00:11:37.803 13464.669 - 13524.247: 94.2349% ( 14) 00:11:37.803 13524.247 - 13583.825: 94.3427% ( 12) 00:11:37.803 13583.825 - 13643.404: 94.4325% ( 10) 00:11:37.803 13643.404 - 13702.982: 94.5312% ( 11) 00:11:37.803 13702.982 - 13762.560: 94.6300% ( 11) 00:11:37.803 13762.560 - 13822.138: 94.7288% ( 11) 00:11:37.803 13822.138 - 13881.716: 94.8366% ( 12) 00:11:37.803 13881.716 - 13941.295: 
94.9443% ( 12) 00:11:37.803 13941.295 - 14000.873: 95.0072% ( 7) 00:11:37.803 14000.873 - 14060.451: 95.0700% ( 7) 00:11:37.803 14060.451 - 14120.029: 95.1868% ( 13) 00:11:37.803 14120.029 - 14179.607: 95.3125% ( 14) 00:11:37.803 14179.607 - 14239.185: 95.4652% ( 17) 00:11:37.803 14239.185 - 14298.764: 95.5909% ( 14) 00:11:37.803 14298.764 - 14358.342: 95.6986% ( 12) 00:11:37.803 14358.342 - 14417.920: 95.7795% ( 9) 00:11:37.803 14417.920 - 14477.498: 95.9142% ( 15) 00:11:37.803 14477.498 - 14537.076: 96.1297% ( 24) 00:11:37.803 14537.076 - 14596.655: 96.2374% ( 12) 00:11:37.803 14596.655 - 14656.233: 96.3631% ( 14) 00:11:37.803 14656.233 - 14715.811: 96.5427% ( 20) 00:11:37.803 14715.811 - 14775.389: 96.6415% ( 11) 00:11:37.803 14775.389 - 14834.967: 96.7134% ( 8) 00:11:37.803 14834.967 - 14894.545: 96.7852% ( 8) 00:11:37.803 14894.545 - 14954.124: 96.8660% ( 9) 00:11:37.803 14954.124 - 15013.702: 96.9289% ( 7) 00:11:37.804 15013.702 - 15073.280: 97.0007% ( 8) 00:11:37.804 15073.280 - 15132.858: 97.0636% ( 7) 00:11:37.804 15132.858 - 15192.436: 97.1534% ( 10) 00:11:37.804 15192.436 - 15252.015: 97.2432% ( 10) 00:11:37.804 15252.015 - 15371.171: 97.5216% ( 31) 00:11:37.804 15371.171 - 15490.327: 97.7191% ( 22) 00:11:37.804 15490.327 - 15609.484: 97.9077% ( 21) 00:11:37.804 15609.484 - 15728.640: 98.1412% ( 26) 00:11:37.804 15728.640 - 15847.796: 98.2848% ( 16) 00:11:37.804 15847.796 - 15966.953: 98.4285% ( 16) 00:11:37.804 15966.953 - 16086.109: 98.5902% ( 18) 00:11:37.804 16086.109 - 16205.265: 98.6889% ( 11) 00:11:37.804 16205.265 - 16324.422: 98.7428% ( 6) 00:11:37.804 16324.422 - 16443.578: 98.7967% ( 6) 00:11:37.804 16443.578 - 16562.735: 98.8506% ( 6) 00:11:37.804 26691.025 - 26810.182: 98.8596% ( 1) 00:11:37.804 26810.182 - 26929.338: 98.8775% ( 2) 00:11:37.804 26929.338 - 27048.495: 98.9045% ( 3) 00:11:37.804 27048.495 - 27167.651: 98.9314% ( 3) 00:11:37.804 27167.651 - 27286.807: 98.9583% ( 3) 00:11:37.804 27286.807 - 27405.964: 98.9763% ( 2) 00:11:37.804 27405.964 - 27525.120: 99.0032% ( 3) 00:11:37.804 27525.120 - 27644.276: 99.0212% ( 2) 00:11:37.804 27644.276 - 27763.433: 99.0481% ( 3) 00:11:37.804 27763.433 - 27882.589: 99.0751% ( 3) 00:11:37.804 27882.589 - 28001.745: 99.0930% ( 2) 00:11:37.804 28001.745 - 28120.902: 99.1200% ( 3) 00:11:37.804 28120.902 - 28240.058: 99.1379% ( 2) 00:11:37.804 28240.058 - 28359.215: 99.1649% ( 3) 00:11:37.804 28359.215 - 28478.371: 99.1828% ( 2) 00:11:37.804 28478.371 - 28597.527: 99.2008% ( 2) 00:11:37.804 28597.527 - 28716.684: 99.2277% ( 3) 00:11:37.804 28716.684 - 28835.840: 99.2547% ( 3) 00:11:37.804 28835.840 - 28954.996: 99.2636% ( 1) 00:11:37.804 28954.996 - 29074.153: 99.2906% ( 3) 00:11:37.804 29074.153 - 29193.309: 99.3085% ( 2) 00:11:37.804 29193.309 - 29312.465: 99.3355% ( 3) 00:11:37.804 29312.465 - 29431.622: 99.3534% ( 2) 00:11:37.804 29431.622 - 29550.778: 99.3804% ( 3) 00:11:37.804 29550.778 - 29669.935: 99.3983% ( 2) 00:11:37.804 29669.935 - 29789.091: 99.4253% ( 3) 00:11:37.804 35031.971 - 35270.284: 99.4343% ( 1) 00:11:37.804 35270.284 - 35508.596: 99.4881% ( 6) 00:11:37.804 35508.596 - 35746.909: 99.5420% ( 6) 00:11:37.804 35746.909 - 35985.222: 99.5869% ( 5) 00:11:37.804 35985.222 - 36223.535: 99.6408% ( 6) 00:11:37.804 36223.535 - 36461.847: 99.6947% ( 6) 00:11:37.804 36461.847 - 36700.160: 99.7486% ( 6) 00:11:37.804 36700.160 - 36938.473: 99.8024% ( 6) 00:11:37.804 36938.473 - 37176.785: 99.8653% ( 7) 00:11:37.804 37176.785 - 37415.098: 99.9102% ( 5) 00:11:37.804 37415.098 - 37653.411: 99.9641% ( 6) 00:11:37.804 
37653.411 - 37891.724: 100.0000% ( 4) 00:11:37.804 00:11:37.804 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:37.804 ============================================================================== 00:11:37.804 Range in us Cumulative IO count 00:11:37.804 9175.040 - 9234.618: 0.0090% ( 1) 00:11:37.804 9353.775 - 9413.353: 0.0180% ( 1) 00:11:37.804 9413.353 - 9472.931: 0.0539% ( 4) 00:11:37.804 9472.931 - 9532.509: 0.0898% ( 4) 00:11:37.804 9532.509 - 9592.087: 0.1347% ( 5) 00:11:37.804 9592.087 - 9651.665: 0.1706% ( 4) 00:11:37.804 9651.665 - 9711.244: 0.2155% ( 5) 00:11:37.804 9711.244 - 9770.822: 0.3323% ( 13) 00:11:37.804 9770.822 - 9830.400: 0.5657% ( 26) 00:11:37.804 9830.400 - 9889.978: 0.9159% ( 39) 00:11:37.804 9889.978 - 9949.556: 1.5445% ( 70) 00:11:37.804 9949.556 - 10009.135: 2.3527% ( 90) 00:11:37.804 10009.135 - 10068.713: 3.6907% ( 149) 00:11:37.804 10068.713 - 10128.291: 4.9479% ( 140) 00:11:37.804 10128.291 - 10187.869: 6.3667% ( 158) 00:11:37.804 10187.869 - 10247.447: 8.0909% ( 192) 00:11:37.804 10247.447 - 10307.025: 10.2999% ( 246) 00:11:37.804 10307.025 - 10366.604: 13.0478% ( 306) 00:11:37.804 10366.604 - 10426.182: 15.9573% ( 324) 00:11:37.804 10426.182 - 10485.760: 19.0643% ( 346) 00:11:37.804 10485.760 - 10545.338: 22.6473% ( 399) 00:11:37.804 10545.338 - 10604.916: 25.8800% ( 360) 00:11:37.804 10604.916 - 10664.495: 29.5348% ( 407) 00:11:37.804 10664.495 - 10724.073: 33.2795% ( 417) 00:11:37.804 10724.073 - 10783.651: 36.5930% ( 369) 00:11:37.804 10783.651 - 10843.229: 40.2209% ( 404) 00:11:37.804 10843.229 - 10902.807: 43.8218% ( 401) 00:11:37.804 10902.807 - 10962.385: 47.3958% ( 398) 00:11:37.804 10962.385 - 11021.964: 50.8441% ( 384) 00:11:37.804 11021.964 - 11081.542: 54.0679% ( 359) 00:11:37.804 11081.542 - 11141.120: 57.0043% ( 327) 00:11:37.804 11141.120 - 11200.698: 59.9318% ( 326) 00:11:37.804 11200.698 - 11260.276: 62.5359% ( 290) 00:11:37.804 11260.276 - 11319.855: 64.9605% ( 270) 00:11:37.804 11319.855 - 11379.433: 67.3042% ( 261) 00:11:37.804 11379.433 - 11439.011: 69.8455% ( 283) 00:11:37.804 11439.011 - 11498.589: 72.3330% ( 277) 00:11:37.804 11498.589 - 11558.167: 74.6857% ( 262) 00:11:37.804 11558.167 - 11617.745: 76.6343% ( 217) 00:11:37.804 11617.745 - 11677.324: 78.5201% ( 210) 00:11:37.804 11677.324 - 11736.902: 80.2173% ( 189) 00:11:37.804 11736.902 - 11796.480: 81.6631% ( 161) 00:11:37.804 11796.480 - 11856.058: 82.8394% ( 131) 00:11:37.804 11856.058 - 11915.636: 83.8991% ( 118) 00:11:37.804 11915.636 - 11975.215: 84.8689% ( 108) 00:11:37.804 11975.215 - 12034.793: 85.7220% ( 95) 00:11:37.804 12034.793 - 12094.371: 86.4134% ( 77) 00:11:37.804 12094.371 - 12153.949: 86.9881% ( 64) 00:11:37.804 12153.949 - 12213.527: 87.6167% ( 70) 00:11:37.804 12213.527 - 12273.105: 88.2543% ( 71) 00:11:37.804 12273.105 - 12332.684: 88.8021% ( 61) 00:11:37.804 12332.684 - 12392.262: 89.2331% ( 48) 00:11:37.804 12392.262 - 12451.840: 89.6552% ( 47) 00:11:37.804 12451.840 - 12511.418: 90.0054% ( 39) 00:11:37.804 12511.418 - 12570.996: 90.3556% ( 39) 00:11:37.804 12570.996 - 12630.575: 90.5621% ( 23) 00:11:37.804 12630.575 - 12690.153: 90.9034% ( 38) 00:11:37.804 12690.153 - 12749.731: 91.2626% ( 40) 00:11:37.804 12749.731 - 12809.309: 91.5769% ( 35) 00:11:37.804 12809.309 - 12868.887: 91.9810% ( 45) 00:11:37.804 12868.887 - 12928.465: 92.2953% ( 35) 00:11:37.804 12928.465 - 12988.044: 92.5198% ( 25) 00:11:37.804 12988.044 - 13047.622: 92.7892% ( 30) 00:11:37.804 13047.622 - 13107.200: 93.0226% ( 26) 00:11:37.805 13107.200 - 13166.778: 
93.2561% ( 26) 00:11:37.805 13166.778 - 13226.356: 93.4357% ( 20) 00:11:37.805 13226.356 - 13285.935: 93.6243% ( 21) 00:11:37.805 13285.935 - 13345.513: 93.7859% ( 18) 00:11:37.805 13345.513 - 13405.091: 93.9655% ( 20) 00:11:37.805 13405.091 - 13464.669: 94.1451% ( 20) 00:11:37.805 13464.669 - 13524.247: 94.3068% ( 18) 00:11:37.805 13524.247 - 13583.825: 94.4684% ( 18) 00:11:37.805 13583.825 - 13643.404: 94.6659% ( 22) 00:11:37.805 13643.404 - 13702.982: 94.8366% ( 19) 00:11:37.805 13702.982 - 13762.560: 95.0072% ( 19) 00:11:37.805 13762.560 - 13822.138: 95.1598% ( 17) 00:11:37.805 13822.138 - 13881.716: 95.3215% ( 18) 00:11:37.805 13881.716 - 13941.295: 95.4292% ( 12) 00:11:37.805 13941.295 - 14000.873: 95.5101% ( 9) 00:11:37.805 14000.873 - 14060.451: 95.5909% ( 9) 00:11:37.805 14060.451 - 14120.029: 95.6807% ( 10) 00:11:37.805 14120.029 - 14179.607: 95.7705% ( 10) 00:11:37.805 14179.607 - 14239.185: 95.8333% ( 7) 00:11:37.805 14239.185 - 14298.764: 95.9052% ( 8) 00:11:37.805 14298.764 - 14358.342: 95.9680% ( 7) 00:11:37.805 14358.342 - 14417.920: 96.0309% ( 7) 00:11:37.805 14417.920 - 14477.498: 96.1746% ( 16) 00:11:37.805 14477.498 - 14537.076: 96.3272% ( 17) 00:11:37.805 14537.076 - 14596.655: 96.4529% ( 14) 00:11:37.805 14596.655 - 14656.233: 96.5517% ( 11) 00:11:37.805 14656.233 - 14715.811: 96.6685% ( 13) 00:11:37.805 14715.811 - 14775.389: 96.7583% ( 10) 00:11:37.805 14775.389 - 14834.967: 96.8391% ( 9) 00:11:37.805 14834.967 - 14894.545: 96.9468% ( 12) 00:11:37.805 14894.545 - 14954.124: 97.0546% ( 12) 00:11:37.805 14954.124 - 15013.702: 97.1624% ( 12) 00:11:37.805 15013.702 - 15073.280: 97.2522% ( 10) 00:11:37.805 15073.280 - 15132.858: 97.3420% ( 10) 00:11:37.805 15132.858 - 15192.436: 97.4407% ( 11) 00:11:37.805 15192.436 - 15252.015: 97.5305% ( 10) 00:11:37.805 15252.015 - 15371.171: 97.7011% ( 19) 00:11:37.805 15371.171 - 15490.327: 97.9616% ( 29) 00:11:37.805 15490.327 - 15609.484: 98.2759% ( 35) 00:11:37.805 15609.484 - 15728.640: 98.4195% ( 16) 00:11:37.805 15728.640 - 15847.796: 98.5183% ( 11) 00:11:37.805 15847.796 - 15966.953: 98.5991% ( 9) 00:11:37.805 15966.953 - 16086.109: 98.6620% ( 7) 00:11:37.805 16086.109 - 16205.265: 98.7159% ( 6) 00:11:37.805 16205.265 - 16324.422: 98.7698% ( 6) 00:11:37.805 16324.422 - 16443.578: 98.8236% ( 6) 00:11:37.805 16443.578 - 16562.735: 98.8506% ( 3) 00:11:37.805 25022.836 - 25141.993: 98.8865% ( 4) 00:11:37.805 25141.993 - 25261.149: 98.9314% ( 5) 00:11:37.805 25261.149 - 25380.305: 98.9673% ( 4) 00:11:37.805 25380.305 - 25499.462: 98.9853% ( 2) 00:11:37.805 25499.462 - 25618.618: 99.0392% ( 6) 00:11:37.805 25618.618 - 25737.775: 99.0751% ( 4) 00:11:37.805 25737.775 - 25856.931: 99.1020% ( 3) 00:11:37.805 25856.931 - 25976.087: 99.1200% ( 2) 00:11:37.805 25976.087 - 26095.244: 99.1469% ( 3) 00:11:37.805 26095.244 - 26214.400: 99.1649% ( 2) 00:11:37.805 26214.400 - 26333.556: 99.1828% ( 2) 00:11:37.805 26333.556 - 26452.713: 99.2008% ( 2) 00:11:37.805 26452.713 - 26571.869: 99.2188% ( 2) 00:11:37.805 26571.869 - 26691.025: 99.2457% ( 3) 00:11:37.805 26691.025 - 26810.182: 99.2636% ( 2) 00:11:37.805 26810.182 - 26929.338: 99.2816% ( 2) 00:11:37.805 26929.338 - 27048.495: 99.3085% ( 3) 00:11:37.805 27048.495 - 27167.651: 99.3355% ( 3) 00:11:37.805 27167.651 - 27286.807: 99.3534% ( 2) 00:11:37.805 27286.807 - 27405.964: 99.3714% ( 2) 00:11:37.805 27405.964 - 27525.120: 99.3983% ( 3) 00:11:37.805 27525.120 - 27644.276: 99.4253% ( 3) 00:11:37.805 30742.342 - 30980.655: 99.5061% ( 9) 00:11:37.805 32648.844 - 32887.156: 99.5330% ( 3) 
00:11:37.805 32887.156 - 33125.469: 99.5690% ( 4) 00:11:37.805 33125.469 - 33363.782: 99.6318% ( 7) 00:11:37.805 33363.782 - 33602.095: 99.6677% ( 4) 00:11:37.805 33602.095 - 33840.407: 99.7306% ( 7) 00:11:37.805 33840.407 - 34078.720: 99.7755% ( 5) 00:11:37.805 34078.720 - 34317.033: 99.8294% ( 6) 00:11:37.805 34317.033 - 34555.345: 99.8833% ( 6) 00:11:37.805 34555.345 - 34793.658: 99.9282% ( 5) 00:11:37.805 34793.658 - 35031.971: 99.9820% ( 6) 00:11:37.805 35031.971 - 35270.284: 100.0000% ( 2) 00:11:37.805 00:11:37.805 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:37.805 ============================================================================== 00:11:37.805 Range in us Cumulative IO count 00:11:37.805 9294.196 - 9353.775: 0.0090% ( 1) 00:11:37.805 9592.087 - 9651.665: 0.0629% ( 6) 00:11:37.805 9651.665 - 9711.244: 0.1527% ( 10) 00:11:37.805 9711.244 - 9770.822: 0.3502% ( 22) 00:11:37.805 9770.822 - 9830.400: 0.6555% ( 34) 00:11:37.805 9830.400 - 9889.978: 0.9878% ( 37) 00:11:37.805 9889.978 - 9949.556: 1.8499% ( 96) 00:11:37.805 9949.556 - 10009.135: 2.6580% ( 90) 00:11:37.805 10009.135 - 10068.713: 3.5920% ( 104) 00:11:37.805 10068.713 - 10128.291: 4.7953% ( 134) 00:11:37.805 10128.291 - 10187.869: 6.4745% ( 187) 00:11:37.805 10187.869 - 10247.447: 8.5578% ( 232) 00:11:37.805 10247.447 - 10307.025: 10.7848% ( 248) 00:11:37.805 10307.025 - 10366.604: 13.5866% ( 312) 00:11:37.805 10366.604 - 10426.182: 16.2895% ( 301) 00:11:37.805 10426.182 - 10485.760: 19.1182% ( 315) 00:11:37.805 10485.760 - 10545.338: 22.4767% ( 374) 00:11:37.805 10545.338 - 10604.916: 25.9519% ( 387) 00:11:37.805 10604.916 - 10664.495: 29.2834% ( 371) 00:11:37.805 10664.495 - 10724.073: 32.8125% ( 393) 00:11:37.805 10724.073 - 10783.651: 36.7906% ( 443) 00:11:37.805 10783.651 - 10843.229: 40.6789% ( 433) 00:11:37.805 10843.229 - 10902.807: 44.5851% ( 435) 00:11:37.805 10902.807 - 10962.385: 48.1322% ( 395) 00:11:37.805 10962.385 - 11021.964: 51.5984% ( 386) 00:11:37.805 11021.964 - 11081.542: 54.8761% ( 365) 00:11:37.805 11081.542 - 11141.120: 57.8484% ( 331) 00:11:37.805 11141.120 - 11200.698: 60.7220% ( 320) 00:11:37.805 11200.698 - 11260.276: 63.3710% ( 295) 00:11:37.805 11260.276 - 11319.855: 65.9213% ( 284) 00:11:37.805 11319.855 - 11379.433: 68.2112% ( 255) 00:11:37.805 11379.433 - 11439.011: 70.5819% ( 264) 00:11:37.805 11439.011 - 11498.589: 72.9885% ( 268) 00:11:37.805 11498.589 - 11558.167: 74.7845% ( 200) 00:11:37.805 11558.167 - 11617.745: 76.4278% ( 183) 00:11:37.805 11617.745 - 11677.324: 77.8646% ( 160) 00:11:37.806 11677.324 - 11736.902: 79.5079% ( 183) 00:11:37.806 11736.902 - 11796.480: 80.9626% ( 162) 00:11:37.806 11796.480 - 11856.058: 82.3186% ( 151) 00:11:37.806 11856.058 - 11915.636: 83.3513% ( 115) 00:11:37.806 11915.636 - 11975.215: 84.5546% ( 134) 00:11:37.806 11975.215 - 12034.793: 85.4975% ( 105) 00:11:37.806 12034.793 - 12094.371: 86.2608% ( 85) 00:11:37.806 12094.371 - 12153.949: 86.9971% ( 82) 00:11:37.806 12153.949 - 12213.527: 87.6886% ( 77) 00:11:37.806 12213.527 - 12273.105: 88.2004% ( 57) 00:11:37.806 12273.105 - 12332.684: 88.6853% ( 54) 00:11:37.806 12332.684 - 12392.262: 89.0984% ( 46) 00:11:37.806 12392.262 - 12451.840: 89.4666% ( 41) 00:11:37.806 12451.840 - 12511.418: 89.8617% ( 44) 00:11:37.806 12511.418 - 12570.996: 90.3466% ( 54) 00:11:37.806 12570.996 - 12630.575: 90.8046% ( 51) 00:11:37.806 12630.575 - 12690.153: 91.1548% ( 39) 00:11:37.806 12690.153 - 12749.731: 91.5499% ( 44) 00:11:37.806 12749.731 - 12809.309: 91.8912% ( 38) 00:11:37.806 
12809.309 - 12868.887: 92.1336% ( 27) 00:11:37.806 12868.887 - 12928.465: 92.3222% ( 21) 00:11:37.806 12928.465 - 12988.044: 92.5108% ( 21) 00:11:37.806 12988.044 - 13047.622: 92.6904% ( 20) 00:11:37.806 13047.622 - 13107.200: 92.8430% ( 17) 00:11:37.806 13107.200 - 13166.778: 93.0585% ( 24) 00:11:37.806 13166.778 - 13226.356: 93.3459% ( 32) 00:11:37.806 13226.356 - 13285.935: 93.5794% ( 26) 00:11:37.806 13285.935 - 13345.513: 93.8129% ( 26) 00:11:37.806 13345.513 - 13405.091: 94.0823% ( 30) 00:11:37.806 13405.091 - 13464.669: 94.3068% ( 25) 00:11:37.806 13464.669 - 13524.247: 94.4594% ( 17) 00:11:37.806 13524.247 - 13583.825: 94.6121% ( 17) 00:11:37.806 13583.825 - 13643.404: 94.7557% ( 16) 00:11:37.806 13643.404 - 13702.982: 94.8725% ( 13) 00:11:37.806 13702.982 - 13762.560: 94.9533% ( 9) 00:11:37.806 13762.560 - 13822.138: 95.0431% ( 10) 00:11:37.806 13822.138 - 13881.716: 95.1419% ( 11) 00:11:37.806 13881.716 - 13941.295: 95.2317% ( 10) 00:11:37.806 13941.295 - 14000.873: 95.3394% ( 12) 00:11:37.806 14000.873 - 14060.451: 95.4652% ( 14) 00:11:37.806 14060.451 - 14120.029: 95.6537% ( 21) 00:11:37.806 14120.029 - 14179.607: 95.7795% ( 14) 00:11:37.806 14179.607 - 14239.185: 95.8603% ( 9) 00:11:37.806 14239.185 - 14298.764: 95.9052% ( 5) 00:11:37.806 14298.764 - 14358.342: 95.9231% ( 2) 00:11:37.806 14358.342 - 14417.920: 95.9501% ( 3) 00:11:37.806 14417.920 - 14477.498: 95.9591% ( 1) 00:11:37.806 14477.498 - 14537.076: 96.0040% ( 5) 00:11:37.806 14537.076 - 14596.655: 96.0219% ( 2) 00:11:37.806 14596.655 - 14656.233: 96.0489% ( 3) 00:11:37.806 14656.233 - 14715.811: 96.0938% ( 5) 00:11:37.806 14715.811 - 14775.389: 96.1207% ( 3) 00:11:37.806 14775.389 - 14834.967: 96.1925% ( 8) 00:11:37.806 14834.967 - 14894.545: 96.3182% ( 14) 00:11:37.806 14894.545 - 14954.124: 96.5338% ( 24) 00:11:37.806 14954.124 - 15013.702: 96.6954% ( 18) 00:11:37.806 15013.702 - 15073.280: 96.8750% ( 20) 00:11:37.806 15073.280 - 15132.858: 97.0905% ( 24) 00:11:37.806 15132.858 - 15192.436: 97.2073% ( 13) 00:11:37.806 15192.436 - 15252.015: 97.2971% ( 10) 00:11:37.806 15252.015 - 15371.171: 97.5036% ( 23) 00:11:37.806 15371.171 - 15490.327: 97.6562% ( 17) 00:11:37.806 15490.327 - 15609.484: 97.8538% ( 22) 00:11:37.806 15609.484 - 15728.640: 98.0963% ( 27) 00:11:37.806 15728.640 - 15847.796: 98.3387% ( 27) 00:11:37.806 15847.796 - 15966.953: 98.5632% ( 25) 00:11:37.806 15966.953 - 16086.109: 98.6620% ( 11) 00:11:37.806 16086.109 - 16205.265: 98.7069% ( 5) 00:11:37.806 16205.265 - 16324.422: 98.7518% ( 5) 00:11:37.806 16324.422 - 16443.578: 98.7967% ( 5) 00:11:37.806 16443.578 - 16562.735: 98.8416% ( 5) 00:11:37.806 16562.735 - 16681.891: 98.8506% ( 1) 00:11:37.806 22282.240 - 22401.396: 98.8596% ( 1) 00:11:37.806 22520.553 - 22639.709: 98.8955% ( 4) 00:11:37.806 22639.709 - 22758.865: 98.9404% ( 5) 00:11:37.806 22758.865 - 22878.022: 98.9673% ( 3) 00:11:37.806 22878.022 - 22997.178: 99.0122% ( 5) 00:11:37.806 22997.178 - 23116.335: 99.0571% ( 5) 00:11:37.806 23116.335 - 23235.491: 99.0930% ( 4) 00:11:37.806 23235.491 - 23354.647: 99.1020% ( 1) 00:11:37.806 23354.647 - 23473.804: 99.1290% ( 3) 00:11:37.806 23473.804 - 23592.960: 99.1469% ( 2) 00:11:37.806 23592.960 - 23712.116: 99.1649% ( 2) 00:11:37.806 23712.116 - 23831.273: 99.1918% ( 3) 00:11:37.806 23831.273 - 23950.429: 99.2098% ( 2) 00:11:37.806 23950.429 - 24069.585: 99.2277% ( 2) 00:11:37.806 24069.585 - 24188.742: 99.2457% ( 2) 00:11:37.806 24188.742 - 24307.898: 99.2726% ( 3) 00:11:37.806 24307.898 - 24427.055: 99.2906% ( 2) 00:11:37.806 24427.055 - 
24546.211: 99.3175% ( 3) 00:11:37.806 24546.211 - 24665.367: 99.3355% ( 2) 00:11:37.806 24665.367 - 24784.524: 99.3534% ( 2) 00:11:37.806 24784.524 - 24903.680: 99.3804% ( 3) 00:11:37.806 24903.680 - 25022.836: 99.3983% ( 2) 00:11:37.806 25022.836 - 25141.993: 99.4163% ( 2) 00:11:37.806 25141.993 - 25261.149: 99.4253% ( 1) 00:11:37.806 28001.745 - 28120.902: 99.4702% ( 5) 00:11:37.806 28120.902 - 28240.058: 99.5690% ( 11) 00:11:37.807 29908.247 - 30027.404: 99.5779% ( 1) 00:11:37.807 30027.404 - 30146.560: 99.6049% ( 3) 00:11:37.807 30146.560 - 30265.716: 99.6318% ( 3) 00:11:37.807 30265.716 - 30384.873: 99.6588% ( 3) 00:11:37.807 30384.873 - 30504.029: 99.6767% ( 2) 00:11:37.807 30504.029 - 30742.342: 99.7306% ( 6) 00:11:37.807 30742.342 - 30980.655: 99.7845% ( 6) 00:11:37.807 30980.655 - 31218.967: 99.8294% ( 5) 00:11:37.807 31218.967 - 31457.280: 99.8833% ( 6) 00:11:37.807 31457.280 - 31695.593: 99.9282% ( 5) 00:11:37.807 31695.593 - 31933.905: 99.9731% ( 5) 00:11:37.807 31933.905 - 32172.218: 100.0000% ( 3) 00:11:37.807 00:11:37.807 18:18:49 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:37.807 00:11:37.807 real 0m2.823s 00:11:37.807 user 0m2.335s 00:11:37.807 sys 0m0.361s 00:11:37.807 18:18:49 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.807 18:18:49 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:11:37.807 ************************************ 00:11:37.807 END TEST nvme_perf 00:11:37.807 ************************************ 00:11:38.112 18:18:49 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:38.112 18:18:49 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:38.112 18:18:49 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:38.112 18:18:49 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.112 18:18:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:38.112 ************************************ 00:11:38.112 START TEST nvme_hello_world 00:11:38.112 ************************************ 00:11:38.112 18:18:49 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:38.112 Initializing NVMe Controllers 00:11:38.112 Attached to 0000:00:11.0 00:11:38.112 Namespace ID: 1 size: 5GB 00:11:38.112 Attached to 0000:00:13.0 00:11:38.112 Namespace ID: 1 size: 1GB 00:11:38.112 Attached to 0000:00:10.0 00:11:38.112 Namespace ID: 1 size: 6GB 00:11:38.112 Attached to 0000:00:12.0 00:11:38.112 Namespace ID: 1 size: 4GB 00:11:38.112 Namespace ID: 2 size: 4GB 00:11:38.112 Namespace ID: 3 size: 4GB 00:11:38.112 Initialization complete. 00:11:38.112 INFO: using host memory buffer for IO 00:11:38.112 Hello world! 00:11:38.112 INFO: using host memory buffer for IO 00:11:38.112 Hello world! 00:11:38.112 INFO: using host memory buffer for IO 00:11:38.112 Hello world! 00:11:38.112 INFO: using host memory buffer for IO 00:11:38.112 Hello world! 00:11:38.112 INFO: using host memory buffer for IO 00:11:38.112 Hello world! 00:11:38.112 INFO: using host memory buffer for IO 00:11:38.112 Hello world! 
00:11:38.370 00:11:38.370 real 0m0.304s 00:11:38.370 user 0m0.124s 00:11:38.370 sys 0m0.143s 00:11:38.370 18:18:50 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:38.371 18:18:50 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:38.371 ************************************ 00:11:38.371 END TEST nvme_hello_world 00:11:38.371 ************************************ 00:11:38.371 18:18:50 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:38.371 18:18:50 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:38.371 18:18:50 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:38.371 18:18:50 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.371 18:18:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:38.371 ************************************ 00:11:38.371 START TEST nvme_sgl 00:11:38.371 ************************************ 00:11:38.371 18:18:50 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:38.628 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:11:38.628 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:11:38.628 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:11:38.628 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:11:38.628 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:11:38.628 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:11:38.628 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:11:38.628 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:11:38.628 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:11:38.628 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:11:38.628 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:11:38.628 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:11:38.628 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:11:38.628 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:11:38.628 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:11:38.628 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:11:38.628 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:11:38.628 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:11:38.628 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:11:38.628 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:11:38.628 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:11:38.628 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:11:38.628 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:11:38.628 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:11:38.628 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:11:38.628 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:11:38.628 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:11:38.628 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:11:38.628 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:11:38.628 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:11:38.628 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:11:38.628 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:11:38.628 0000:00:12.0: build_io_request_8 Invalid IO length parameter 
00:11:38.628 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:11:38.628 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:11:38.628 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:11:38.628 NVMe Readv/Writev Request test 00:11:38.628 Attached to 0000:00:11.0 00:11:38.628 Attached to 0000:00:13.0 00:11:38.628 Attached to 0000:00:10.0 00:11:38.628 Attached to 0000:00:12.0 00:11:38.628 0000:00:11.0: build_io_request_2 test passed 00:11:38.628 0000:00:11.0: build_io_request_4 test passed 00:11:38.628 0000:00:11.0: build_io_request_5 test passed 00:11:38.628 0000:00:11.0: build_io_request_6 test passed 00:11:38.628 0000:00:11.0: build_io_request_7 test passed 00:11:38.628 0000:00:11.0: build_io_request_10 test passed 00:11:38.628 0000:00:10.0: build_io_request_2 test passed 00:11:38.628 0000:00:10.0: build_io_request_4 test passed 00:11:38.628 0000:00:10.0: build_io_request_5 test passed 00:11:38.628 0000:00:10.0: build_io_request_6 test passed 00:11:38.628 0000:00:10.0: build_io_request_7 test passed 00:11:38.628 0000:00:10.0: build_io_request_10 test passed 00:11:38.628 Cleaning up... 00:11:38.628 00:11:38.628 real 0m0.408s 00:11:38.628 user 0m0.209s 00:11:38.628 sys 0m0.152s 00:11:38.628 18:18:50 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:38.628 18:18:50 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:11:38.628 ************************************ 00:11:38.628 END TEST nvme_sgl 00:11:38.628 ************************************ 00:11:38.628 18:18:50 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:38.628 18:18:50 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:38.628 18:18:50 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:38.628 18:18:50 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.628 18:18:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:38.886 ************************************ 00:11:38.886 START TEST nvme_e2edp 00:11:38.886 ************************************ 00:11:38.886 18:18:50 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:39.145 NVMe Write/Read with End-to-End data protection test 00:11:39.145 Attached to 0000:00:11.0 00:11:39.145 Attached to 0000:00:13.0 00:11:39.145 Attached to 0000:00:10.0 00:11:39.145 Attached to 0000:00:12.0 00:11:39.145 Cleaning up... 
00:11:39.145 00:11:39.145 real 0m0.326s 00:11:39.145 user 0m0.117s 00:11:39.145 sys 0m0.162s 00:11:39.145 18:18:50 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.145 18:18:50 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:11:39.145 ************************************ 00:11:39.145 END TEST nvme_e2edp 00:11:39.145 ************************************ 00:11:39.145 18:18:51 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:39.145 18:18:51 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:39.145 18:18:51 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:39.145 18:18:51 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.145 18:18:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:39.145 ************************************ 00:11:39.145 START TEST nvme_reserve 00:11:39.145 ************************************ 00:11:39.145 18:18:51 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:39.404 ===================================================== 00:11:39.404 NVMe Controller at PCI bus 0, device 17, function 0 00:11:39.404 ===================================================== 00:11:39.404 Reservations: Not Supported 00:11:39.404 ===================================================== 00:11:39.404 NVMe Controller at PCI bus 0, device 19, function 0 00:11:39.404 ===================================================== 00:11:39.404 Reservations: Not Supported 00:11:39.404 ===================================================== 00:11:39.404 NVMe Controller at PCI bus 0, device 16, function 0 00:11:39.404 ===================================================== 00:11:39.404 Reservations: Not Supported 00:11:39.404 ===================================================== 00:11:39.404 NVMe Controller at PCI bus 0, device 18, function 0 00:11:39.404 ===================================================== 00:11:39.404 Reservations: Not Supported 00:11:39.404 Reservation test passed 00:11:39.404 00:11:39.404 real 0m0.315s 00:11:39.404 user 0m0.101s 00:11:39.404 sys 0m0.156s 00:11:39.404 18:18:51 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.404 18:18:51 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:11:39.404 ************************************ 00:11:39.404 END TEST nvme_reserve 00:11:39.404 ************************************ 00:11:39.404 18:18:51 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:39.404 18:18:51 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:39.404 18:18:51 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:39.404 18:18:51 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.404 18:18:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:39.404 ************************************ 00:11:39.404 START TEST nvme_err_injection 00:11:39.404 ************************************ 00:11:39.404 18:18:51 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:39.971 NVMe Error Injection test 00:11:39.971 Attached to 0000:00:11.0 00:11:39.971 Attached to 0000:00:13.0 00:11:39.971 Attached to 0000:00:10.0 00:11:39.971 Attached to 0000:00:12.0 00:11:39.971 0000:00:10.0: get features failed as expected 00:11:39.971 0000:00:12.0: get features 
failed as expected 00:11:39.971 0000:00:11.0: get features failed as expected 00:11:39.971 0000:00:13.0: get features failed as expected 00:11:39.971 0000:00:11.0: get features successfully as expected 00:11:39.971 0000:00:13.0: get features successfully as expected 00:11:39.971 0000:00:10.0: get features successfully as expected 00:11:39.971 0000:00:12.0: get features successfully as expected 00:11:39.971 0000:00:11.0: read failed as expected 00:11:39.971 0000:00:13.0: read failed as expected 00:11:39.971 0000:00:10.0: read failed as expected 00:11:39.971 0000:00:12.0: read failed as expected 00:11:39.971 0000:00:11.0: read successfully as expected 00:11:39.971 0000:00:13.0: read successfully as expected 00:11:39.971 0000:00:10.0: read successfully as expected 00:11:39.971 0000:00:12.0: read successfully as expected 00:11:39.971 Cleaning up... 00:11:39.971 00:11:39.971 real 0m0.353s 00:11:39.971 user 0m0.144s 00:11:39.971 sys 0m0.156s 00:11:39.971 18:18:51 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.971 18:18:51 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:11:39.971 ************************************ 00:11:39.971 END TEST nvme_err_injection 00:11:39.971 ************************************ 00:11:39.971 18:18:51 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:39.971 18:18:51 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:39.971 18:18:51 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:39.971 18:18:51 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.971 18:18:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:39.971 ************************************ 00:11:39.971 START TEST nvme_overhead 00:11:39.971 ************************************ 00:11:39.971 18:18:51 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:41.349 Initializing NVMe Controllers 00:11:41.349 Attached to 0000:00:11.0 00:11:41.349 Attached to 0000:00:13.0 00:11:41.349 Attached to 0000:00:10.0 00:11:41.349 Attached to 0000:00:12.0 00:11:41.349 Initialization complete. Launching workers. 
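The overhead benchmark launched above (nvme.sh@92) drives all attached controllers and then prints the submit/complete latency histograms that follow. A sketch of re-running it by hand with the same command line; the flag meanings are inferred from this invocation and should be checked against the tool's usage output (assumptions: -o is the I/O size in bytes, -t the run time in seconds, -H enables the histograms, -i the shared-memory ID):

    # Same command line as the run_test invocation above; run as root,
    # as the CI does.
    /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead \
        -o 4096 -t 1 -H -i 0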
00:11:41.349 submit (in ns) avg, min, max = 15444.7, 13950.0, 105877.3 00:11:41.349 complete (in ns) avg, min, max = 10084.1, 8950.0, 45910.9 00:11:41.349 00:11:41.349 Submit histogram 00:11:41.349 ================ 00:11:41.349 Range in us Cumulative Count 00:11:41.349 13.905 - 13.964: 0.0398% ( 4) 00:11:41.349 13.964 - 14.022: 0.3385% ( 30) 00:11:41.349 14.022 - 14.080: 1.5931% ( 126) 00:11:41.349 14.080 - 14.138: 4.9188% ( 334) 00:11:41.349 14.138 - 14.196: 10.7338% ( 584) 00:11:41.349 14.196 - 14.255: 19.0879% ( 839) 00:11:41.349 14.255 - 14.313: 28.4975% ( 945) 00:11:41.349 14.313 - 14.371: 37.5286% ( 907) 00:11:41.349 14.371 - 14.429: 44.9866% ( 749) 00:11:41.349 14.429 - 14.487: 50.7617% ( 580) 00:11:41.349 14.487 - 14.545: 55.0732% ( 433) 00:11:41.349 14.545 - 14.604: 58.4088% ( 335) 00:11:41.349 14.604 - 14.662: 60.9280% ( 253) 00:11:41.349 14.662 - 14.720: 63.2281% ( 231) 00:11:41.349 14.720 - 14.778: 64.9806% ( 176) 00:11:41.349 14.778 - 14.836: 66.4941% ( 152) 00:11:41.349 14.836 - 14.895: 68.0275% ( 154) 00:11:41.349 14.895 - 15.011: 70.7259% ( 271) 00:11:41.349 15.011 - 15.127: 72.7372% ( 202) 00:11:41.349 15.127 - 15.244: 74.5295% ( 180) 00:11:41.349 15.244 - 15.360: 75.9235% ( 140) 00:11:41.349 15.360 - 15.476: 77.2279% ( 131) 00:11:41.349 15.476 - 15.593: 78.3730% ( 115) 00:11:41.349 15.593 - 15.709: 79.1198% ( 75) 00:11:41.349 15.709 - 15.825: 79.5977% ( 48) 00:11:41.349 15.825 - 15.942: 79.8765% ( 28) 00:11:41.349 15.942 - 16.058: 80.0757% ( 20) 00:11:41.349 16.058 - 16.175: 80.1553% ( 8) 00:11:41.349 16.175 - 16.291: 80.2250% ( 7) 00:11:41.349 16.291 - 16.407: 80.3146% ( 9) 00:11:41.349 16.407 - 16.524: 80.3246% ( 1) 00:11:41.349 16.524 - 16.640: 80.3445% ( 2) 00:11:41.349 16.640 - 16.756: 80.3843% ( 4) 00:11:41.349 16.756 - 16.873: 80.4341% ( 5) 00:11:41.349 16.873 - 16.989: 80.4441% ( 1) 00:11:41.349 16.989 - 17.105: 80.8025% ( 36) 00:11:41.349 17.105 - 17.222: 82.2663% ( 147) 00:11:41.349 17.222 - 17.338: 84.8651% ( 261) 00:11:41.349 17.338 - 17.455: 87.5137% ( 266) 00:11:41.349 17.455 - 17.571: 89.2960% ( 179) 00:11:41.349 17.571 - 17.687: 90.4710% ( 118) 00:11:41.349 17.687 - 17.804: 91.1680% ( 70) 00:11:41.349 17.804 - 17.920: 91.7156% ( 55) 00:11:41.349 17.920 - 18.036: 92.1139% ( 40) 00:11:41.349 18.036 - 18.153: 92.4724% ( 36) 00:11:41.349 18.153 - 18.269: 92.7711% ( 30) 00:11:41.349 18.269 - 18.385: 93.0399% ( 27) 00:11:41.349 18.385 - 18.502: 93.2689% ( 23) 00:11:41.349 18.502 - 18.618: 93.5577% ( 29) 00:11:41.349 18.618 - 18.735: 93.9460% ( 39) 00:11:41.349 18.735 - 18.851: 94.2547% ( 31) 00:11:41.349 18.851 - 18.967: 94.5534% ( 30) 00:11:41.349 18.967 - 19.084: 94.6829% ( 13) 00:11:41.349 19.084 - 19.200: 94.8123% ( 13) 00:11:41.349 19.200 - 19.316: 94.9716% ( 16) 00:11:41.349 19.316 - 19.433: 95.0513% ( 8) 00:11:41.349 19.433 - 19.549: 95.1110% ( 6) 00:11:41.349 19.549 - 19.665: 95.1807% ( 7) 00:11:41.349 19.665 - 19.782: 95.2106% ( 3) 00:11:41.349 19.782 - 19.898: 95.2604% ( 5) 00:11:41.349 19.898 - 20.015: 95.3400% ( 8) 00:11:41.349 20.015 - 20.131: 95.4197% ( 8) 00:11:41.349 20.131 - 20.247: 95.5193% ( 10) 00:11:41.349 20.247 - 20.364: 95.5890% ( 7) 00:11:41.349 20.364 - 20.480: 95.6985% ( 11) 00:11:41.349 20.480 - 20.596: 95.7483% ( 5) 00:11:41.349 20.596 - 20.713: 95.8080% ( 6) 00:11:41.349 20.713 - 20.829: 95.9275% ( 12) 00:11:41.349 20.829 - 20.945: 96.0470% ( 12) 00:11:41.349 20.945 - 21.062: 96.2262% ( 18) 00:11:41.349 21.062 - 21.178: 96.4055% ( 18) 00:11:41.349 21.178 - 21.295: 96.6046% ( 20) 00:11:41.349 21.295 - 21.411: 96.7938% ( 19) 
00:11:41.349 21.411 - 21.527: 96.9332% ( 14) 00:11:41.349 21.527 - 21.644: 97.0427% ( 11) 00:11:41.349 21.644 - 21.760: 97.1224% ( 8) 00:11:41.349 21.760 - 21.876: 97.2020% ( 8) 00:11:41.349 21.876 - 21.993: 97.3315% ( 13) 00:11:41.349 21.993 - 22.109: 97.3912% ( 6) 00:11:41.349 22.109 - 22.225: 97.4808% ( 9) 00:11:41.349 22.225 - 22.342: 97.5505% ( 7) 00:11:41.349 22.342 - 22.458: 97.6103% ( 6) 00:11:41.349 22.458 - 22.575: 97.6800% ( 7) 00:11:41.349 22.575 - 22.691: 97.7397% ( 6) 00:11:41.349 22.691 - 22.807: 97.8492% ( 11) 00:11:41.349 22.807 - 22.924: 97.8791% ( 3) 00:11:41.349 22.924 - 23.040: 97.9389% ( 6) 00:11:41.349 23.040 - 23.156: 97.9687% ( 3) 00:11:41.349 23.156 - 23.273: 98.0285% ( 6) 00:11:41.349 23.273 - 23.389: 98.0583% ( 3) 00:11:41.349 23.389 - 23.505: 98.1280% ( 7) 00:11:41.349 23.505 - 23.622: 98.1778% ( 5) 00:11:41.349 23.622 - 23.738: 98.2077% ( 3) 00:11:41.350 23.738 - 23.855: 98.2177% ( 1) 00:11:41.350 23.855 - 23.971: 98.2774% ( 6) 00:11:41.350 23.971 - 24.087: 98.2973% ( 2) 00:11:41.350 24.087 - 24.204: 98.3272% ( 3) 00:11:41.350 24.204 - 24.320: 98.3869% ( 6) 00:11:41.350 24.320 - 24.436: 98.4069% ( 2) 00:11:41.350 24.436 - 24.553: 98.4666% ( 6) 00:11:41.350 24.553 - 24.669: 98.4865% ( 2) 00:11:41.350 24.669 - 24.785: 98.5761% ( 9) 00:11:41.350 24.785 - 24.902: 98.6458% ( 7) 00:11:41.350 24.902 - 25.018: 98.6956% ( 5) 00:11:41.350 25.018 - 25.135: 98.7554% ( 6) 00:11:41.350 25.135 - 25.251: 98.7952% ( 4) 00:11:41.350 25.251 - 25.367: 98.8350% ( 4) 00:11:41.350 25.367 - 25.484: 98.8649% ( 3) 00:11:41.350 25.484 - 25.600: 98.9047% ( 4) 00:11:41.350 25.600 - 25.716: 98.9844% ( 8) 00:11:41.350 25.716 - 25.833: 99.0541% ( 7) 00:11:41.350 25.833 - 25.949: 99.1138% ( 6) 00:11:41.350 25.949 - 26.065: 99.1337% ( 2) 00:11:41.350 26.065 - 26.182: 99.1536% ( 2) 00:11:41.350 26.182 - 26.298: 99.2233% ( 7) 00:11:41.350 26.298 - 26.415: 99.2632% ( 4) 00:11:41.350 26.415 - 26.531: 99.3229% ( 6) 00:11:41.350 26.531 - 26.647: 99.3827% ( 6) 00:11:41.350 26.647 - 26.764: 99.3926% ( 1) 00:11:41.350 26.764 - 26.880: 99.4225% ( 3) 00:11:41.350 26.880 - 26.996: 99.4324% ( 1) 00:11:41.350 26.996 - 27.113: 99.4922% ( 6) 00:11:41.350 27.113 - 27.229: 99.5021% ( 1) 00:11:41.350 27.229 - 27.345: 99.5320% ( 3) 00:11:41.350 27.345 - 27.462: 99.5420% ( 1) 00:11:41.350 27.462 - 27.578: 99.5519% ( 1) 00:11:41.350 27.695 - 27.811: 99.5619% ( 1) 00:11:41.350 27.811 - 27.927: 99.5918% ( 3) 00:11:41.350 27.927 - 28.044: 99.6117% ( 2) 00:11:41.350 28.160 - 28.276: 99.6216% ( 1) 00:11:41.350 28.276 - 28.393: 99.6415% ( 2) 00:11:41.350 28.509 - 28.625: 99.6515% ( 1) 00:11:41.350 28.858 - 28.975: 99.6714% ( 2) 00:11:41.350 29.091 - 29.207: 99.6814% ( 1) 00:11:41.350 29.440 - 29.556: 99.7112% ( 3) 00:11:41.350 29.556 - 29.673: 99.7212% ( 1) 00:11:41.350 29.789 - 30.022: 99.7411% ( 2) 00:11:41.350 30.022 - 30.255: 99.7511% ( 1) 00:11:41.350 30.255 - 30.487: 99.7610% ( 1) 00:11:41.350 30.720 - 30.953: 99.7710% ( 1) 00:11:41.350 30.953 - 31.185: 99.8009% ( 3) 00:11:41.350 31.185 - 31.418: 99.8108% ( 1) 00:11:41.350 31.418 - 31.651: 99.8208% ( 1) 00:11:41.350 31.651 - 31.884: 99.8307% ( 1) 00:11:41.350 31.884 - 32.116: 99.8407% ( 1) 00:11:41.350 33.280 - 33.513: 99.8506% ( 1) 00:11:41.350 33.745 - 33.978: 99.8606% ( 1) 00:11:41.350 33.978 - 34.211: 99.8805% ( 2) 00:11:41.350 34.211 - 34.444: 99.9004% ( 2) 00:11:41.350 34.909 - 35.142: 99.9104% ( 1) 00:11:41.350 35.375 - 35.607: 99.9203% ( 1) 00:11:41.350 35.840 - 36.073: 99.9303% ( 1) 00:11:41.350 37.935 - 38.167: 99.9403% ( 1) 00:11:41.350 53.295 - 53.527: 
99.9502% ( 1) 00:11:41.350 54.924 - 55.156: 99.9602% ( 1) 00:11:41.350 56.553 - 56.785: 99.9701% ( 1) 00:11:41.350 61.905 - 62.371: 99.9801% ( 1) 00:11:41.350 93.556 - 94.022: 99.9900% ( 1) 00:11:41.350 105.658 - 106.124: 100.0000% ( 1) 00:11:41.350 00:11:41.350 Complete histogram 00:11:41.350 ================== 00:11:41.350 Range in us Cumulative Count 00:11:41.350 8.902 - 8.960: 0.0100% ( 1) 00:11:41.350 8.960 - 9.018: 0.3186% ( 31) 00:11:41.350 9.018 - 9.076: 2.2902% ( 198) 00:11:41.350 9.076 - 9.135: 8.1649% ( 590) 00:11:41.350 9.135 - 9.193: 19.0581% ( 1094) 00:11:41.350 9.193 - 9.251: 32.2812% ( 1328) 00:11:41.350 9.251 - 9.309: 44.3095% ( 1208) 00:11:41.350 9.309 - 9.367: 53.4103% ( 914) 00:11:41.350 9.367 - 9.425: 59.4743% ( 609) 00:11:41.350 9.425 - 9.484: 62.9194% ( 346) 00:11:41.350 9.484 - 9.542: 65.2295% ( 232) 00:11:41.350 9.542 - 9.600: 66.5737% ( 135) 00:11:41.350 9.600 - 9.658: 67.6690% ( 110) 00:11:41.350 9.658 - 9.716: 68.3461% ( 68) 00:11:41.350 9.716 - 9.775: 68.8938% ( 55) 00:11:41.350 9.775 - 9.833: 69.5609% ( 67) 00:11:41.350 9.833 - 9.891: 70.1782% ( 62) 00:11:41.350 9.891 - 9.949: 70.9748% ( 80) 00:11:41.350 9.949 - 10.007: 71.7614% ( 79) 00:11:41.350 10.007 - 10.065: 72.5480% ( 79) 00:11:41.350 10.065 - 10.124: 73.3147% ( 77) 00:11:41.350 10.124 - 10.182: 74.2906% ( 98) 00:11:41.350 10.182 - 10.240: 75.0274% ( 74) 00:11:41.350 10.240 - 10.298: 75.5551% ( 53) 00:11:41.350 10.298 - 10.356: 76.1028% ( 55) 00:11:41.350 10.356 - 10.415: 76.4811% ( 38) 00:11:41.350 10.415 - 10.473: 76.6404% ( 16) 00:11:41.350 10.473 - 10.531: 76.7998% ( 16) 00:11:41.350 10.531 - 10.589: 76.9989% ( 20) 00:11:41.350 10.589 - 10.647: 77.0985% ( 10) 00:11:41.350 10.647 - 10.705: 77.2379% ( 14) 00:11:41.350 10.705 - 10.764: 77.4071% ( 17) 00:11:41.350 10.764 - 10.822: 77.4470% ( 4) 00:11:41.350 10.822 - 10.880: 77.4868% ( 4) 00:11:41.350 10.880 - 10.938: 77.5864% ( 10) 00:11:41.350 10.938 - 10.996: 77.7258% ( 14) 00:11:41.350 10.996 - 11.055: 77.9050% ( 18) 00:11:41.350 11.055 - 11.113: 78.1539% ( 25) 00:11:41.350 11.113 - 11.171: 78.4825% ( 33) 00:11:41.350 11.171 - 11.229: 78.8609% ( 38) 00:11:41.350 11.229 - 11.287: 79.6475% ( 79) 00:11:41.350 11.287 - 11.345: 80.3545% ( 71) 00:11:41.350 11.345 - 11.404: 81.5493% ( 120) 00:11:41.350 11.404 - 11.462: 83.3615% ( 182) 00:11:41.350 11.462 - 11.520: 85.5223% ( 217) 00:11:41.350 11.520 - 11.578: 87.9120% ( 240) 00:11:41.350 11.578 - 11.636: 89.8138% ( 191) 00:11:41.350 11.636 - 11.695: 91.0883% ( 128) 00:11:41.350 11.695 - 11.753: 91.9944% ( 91) 00:11:41.350 11.753 - 11.811: 92.5421% ( 55) 00:11:41.350 11.811 - 11.869: 92.9005% ( 36) 00:11:41.350 11.869 - 11.927: 93.2391% ( 34) 00:11:41.350 11.927 - 11.985: 93.5776% ( 34) 00:11:41.350 11.985 - 12.044: 93.7768% ( 20) 00:11:41.350 12.044 - 12.102: 93.9859% ( 21) 00:11:41.350 12.102 - 12.160: 94.1352% ( 15) 00:11:41.350 12.160 - 12.218: 94.2149% ( 8) 00:11:41.350 12.218 - 12.276: 94.3344% ( 12) 00:11:41.350 12.276 - 12.335: 94.4638% ( 13) 00:11:41.350 12.335 - 12.393: 94.5534% ( 9) 00:11:41.350 12.393 - 12.451: 94.7127% ( 16) 00:11:41.350 12.451 - 12.509: 94.9517% ( 24) 00:11:41.350 12.509 - 12.567: 95.1309% ( 18) 00:11:41.350 12.567 - 12.625: 95.2703% ( 14) 00:11:41.350 12.625 - 12.684: 95.4297% ( 16) 00:11:41.350 12.684 - 12.742: 95.5193% ( 9) 00:11:41.350 12.742 - 12.800: 95.5989% ( 8) 00:11:41.350 12.800 - 12.858: 95.6487% ( 5) 00:11:41.350 12.858 - 12.916: 95.6786% ( 3) 00:11:41.350 12.916 - 12.975: 95.7085% ( 3) 00:11:41.350 12.975 - 13.033: 95.7483% ( 4) 00:11:41.350 13.033 - 13.091: 
95.8877% ( 14) 00:11:41.350 13.091 - 13.149: 96.0171% ( 13) 00:11:41.350 13.149 - 13.207: 96.1764% ( 16) 00:11:41.350 13.207 - 13.265: 96.3656% ( 19) 00:11:41.350 13.265 - 13.324: 96.4951% ( 13) 00:11:41.350 13.324 - 13.382: 96.5449% ( 5) 00:11:41.350 13.382 - 13.440: 96.6046% ( 6) 00:11:41.350 13.440 - 13.498: 96.6444% ( 4) 00:11:41.350 13.498 - 13.556: 96.6843% ( 4) 00:11:41.350 13.556 - 13.615: 96.6942% ( 1) 00:11:41.350 13.615 - 13.673: 96.7241% ( 3) 00:11:41.350 13.673 - 13.731: 96.7440% ( 2) 00:11:41.350 13.731 - 13.789: 96.7639% ( 2) 00:11:41.350 13.789 - 13.847: 96.8137% ( 5) 00:11:41.350 13.847 - 13.905: 96.8934% ( 8) 00:11:41.350 13.905 - 13.964: 96.9830% ( 9) 00:11:41.350 13.964 - 14.022: 97.0726% ( 9) 00:11:41.350 14.022 - 14.080: 97.1622% ( 9) 00:11:41.350 14.080 - 14.138: 97.2120% ( 5) 00:11:41.350 14.138 - 14.196: 97.2916% ( 8) 00:11:41.350 14.196 - 14.255: 97.3514% ( 6) 00:11:41.350 14.255 - 14.313: 97.4111% ( 6) 00:11:41.350 14.313 - 14.371: 97.4709% ( 6) 00:11:41.350 14.371 - 14.429: 97.5107% ( 4) 00:11:41.350 14.429 - 14.487: 97.5406% ( 3) 00:11:41.350 14.487 - 14.545: 97.5505% ( 1) 00:11:41.350 14.545 - 14.604: 97.5904% ( 4) 00:11:41.350 14.604 - 14.662: 97.6202% ( 3) 00:11:41.350 14.662 - 14.720: 97.6302% ( 1) 00:11:41.350 14.720 - 14.778: 97.6401% ( 1) 00:11:41.350 14.895 - 15.011: 97.6899% ( 5) 00:11:41.350 15.011 - 15.127: 97.7198% ( 3) 00:11:41.350 15.127 - 15.244: 97.7497% ( 3) 00:11:41.350 15.244 - 15.360: 97.7795% ( 3) 00:11:41.350 15.360 - 15.476: 97.7895% ( 1) 00:11:41.350 15.476 - 15.593: 97.8492% ( 6) 00:11:41.350 15.593 - 15.709: 97.8592% ( 1) 00:11:41.350 15.709 - 15.825: 97.9289% ( 7) 00:11:41.350 15.825 - 15.942: 97.9687% ( 4) 00:11:41.350 15.942 - 16.058: 98.0484% ( 8) 00:11:41.350 16.058 - 16.175: 98.0783% ( 3) 00:11:41.350 16.175 - 16.291: 98.1280% ( 5) 00:11:41.350 16.291 - 16.407: 98.2276% ( 10) 00:11:41.351 16.407 - 16.524: 98.2874% ( 6) 00:11:41.351 16.524 - 16.640: 98.3670% ( 8) 00:11:41.351 16.640 - 16.756: 98.4467% ( 8) 00:11:41.351 16.756 - 16.873: 98.5064% ( 6) 00:11:41.351 16.873 - 16.989: 98.5662% ( 6) 00:11:41.351 16.989 - 17.105: 98.6259% ( 6) 00:11:41.351 17.105 - 17.222: 98.6857% ( 6) 00:11:41.351 17.222 - 17.338: 98.7056% ( 2) 00:11:41.351 17.338 - 17.455: 98.7554% ( 5) 00:11:41.351 17.455 - 17.571: 98.8051% ( 5) 00:11:41.351 17.571 - 17.687: 98.8948% ( 9) 00:11:41.351 17.687 - 17.804: 98.9645% ( 7) 00:11:41.351 17.804 - 17.920: 99.0242% ( 6) 00:11:41.351 17.920 - 18.036: 99.0839% ( 6) 00:11:41.351 18.036 - 18.153: 99.1238% ( 4) 00:11:41.351 18.153 - 18.269: 99.1337% ( 1) 00:11:41.351 18.269 - 18.385: 99.1636% ( 3) 00:11:41.351 18.385 - 18.502: 99.1835% ( 2) 00:11:41.351 18.502 - 18.618: 99.2034% ( 2) 00:11:41.351 18.618 - 18.735: 99.2333% ( 3) 00:11:41.351 18.735 - 18.851: 99.2532% ( 2) 00:11:41.351 18.851 - 18.967: 99.2731% ( 2) 00:11:41.351 18.967 - 19.084: 99.2930% ( 2) 00:11:41.351 19.084 - 19.200: 99.3130% ( 2) 00:11:41.351 19.200 - 19.316: 99.3229% ( 1) 00:11:41.351 19.316 - 19.433: 99.3428% ( 2) 00:11:41.351 19.433 - 19.549: 99.3528% ( 1) 00:11:41.351 19.549 - 19.665: 99.3827% ( 3) 00:11:41.351 19.665 - 19.782: 99.4026% ( 2) 00:11:41.351 19.782 - 19.898: 99.4125% ( 1) 00:11:41.351 19.898 - 20.015: 99.4225% ( 1) 00:11:41.351 20.015 - 20.131: 99.4324% ( 1) 00:11:41.351 20.131 - 20.247: 99.4723% ( 4) 00:11:41.351 20.247 - 20.364: 99.5021% ( 3) 00:11:41.351 20.364 - 20.480: 99.5121% ( 1) 00:11:41.351 20.480 - 20.596: 99.5221% ( 1) 00:11:41.351 20.596 - 20.713: 99.5519% ( 3) 00:11:41.351 20.713 - 20.829: 99.5718% ( 2) 00:11:41.351 
20.829 - 20.945: 99.5818% ( 1) 00:11:41.351 20.945 - 21.062: 99.6117% ( 3) 00:11:41.351 21.178 - 21.295: 99.6415% ( 3) 00:11:41.351 21.295 - 21.411: 99.6515% ( 1) 00:11:41.351 21.411 - 21.527: 99.6615% ( 1) 00:11:41.351 21.527 - 21.644: 99.6814% ( 2) 00:11:41.351 21.760 - 21.876: 99.7013% ( 2) 00:11:41.351 21.876 - 21.993: 99.7212% ( 2) 00:11:41.351 21.993 - 22.109: 99.7411% ( 2) 00:11:41.351 22.109 - 22.225: 99.7511% ( 1) 00:11:41.351 22.225 - 22.342: 99.7710% ( 2) 00:11:41.351 22.342 - 22.458: 99.7809% ( 1) 00:11:41.351 22.458 - 22.575: 99.7909% ( 1) 00:11:41.351 22.575 - 22.691: 99.8009% ( 1) 00:11:41.351 23.040 - 23.156: 99.8108% ( 1) 00:11:41.351 23.156 - 23.273: 99.8307% ( 2) 00:11:41.351 23.273 - 23.389: 99.8506% ( 2) 00:11:41.351 23.622 - 23.738: 99.8606% ( 1) 00:11:41.351 24.087 - 24.204: 99.8706% ( 1) 00:11:41.351 25.018 - 25.135: 99.8805% ( 1) 00:11:41.351 25.135 - 25.251: 99.8905% ( 1) 00:11:41.351 25.367 - 25.484: 99.9004% ( 1) 00:11:41.351 26.182 - 26.298: 99.9104% ( 1) 00:11:41.351 26.298 - 26.415: 99.9203% ( 1) 00:11:41.351 26.531 - 26.647: 99.9303% ( 1) 00:11:41.351 27.345 - 27.462: 99.9403% ( 1) 00:11:41.351 29.324 - 29.440: 99.9502% ( 1) 00:11:41.351 29.440 - 29.556: 99.9602% ( 1) 00:11:41.351 29.789 - 30.022: 99.9701% ( 1) 00:11:41.351 35.840 - 36.073: 99.9801% ( 1) 00:11:41.351 39.098 - 39.331: 99.9900% ( 1) 00:11:41.351 45.847 - 46.080: 100.0000% ( 1) 00:11:41.351 00:11:41.351 00:11:41.351 real 0m1.316s 00:11:41.351 user 0m1.115s 00:11:41.351 sys 0m0.149s 00:11:41.351 18:18:53 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.351 18:18:53 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:11:41.351 ************************************ 00:11:41.351 END TEST nvme_overhead 00:11:41.351 ************************************ 00:11:41.351 18:18:53 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:41.351 18:18:53 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:41.351 18:18:53 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:41.351 18:18:53 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.351 18:18:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:41.351 ************************************ 00:11:41.351 START TEST nvme_arbitration 00:11:41.351 ************************************ 00:11:41.351 18:18:53 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:44.638 Initializing NVMe Controllers 00:11:44.638 Attached to 0000:00:11.0 00:11:44.638 Attached to 0000:00:13.0 00:11:44.638 Attached to 0000:00:10.0 00:11:44.638 Attached to 0000:00:12.0 00:11:44.638 Associating QEMU NVMe Ctrl (12341 ) with lcore 0 00:11:44.638 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:11:44.638 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:11:44.638 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:44.638 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:44.638 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:44.638 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:44.638 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:44.638 Initialization complete. Launching workers. 
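The arbitration example echoes its effective configuration above: -t 3 -i 0 expands to a queue-depth-64 random read/write workload (-M 50) on core mask 0xf. Invoking it directly with the expanded flags, taken verbatim from the tool's own "run with configuration" line, should set up the same run (an assumption; only -t and -i were passed explicitly):

    # Run as root; flags copied from the configuration printed above.
    /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
        -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf \
        -m 0 -a 0 -b 0 -n 100000 -i 0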
00:11:44.638 Starting thread on core 1 with urgent priority queue 00:11:44.638 Starting thread on core 2 with urgent priority queue 00:11:44.638 Starting thread on core 3 with urgent priority queue 00:11:44.638 Starting thread on core 0 with urgent priority queue 00:11:44.638 QEMU NVMe Ctrl (12341 ) core 0: 618.67 IO/s 161.64 secs/100000 ios 00:11:44.638 QEMU NVMe Ctrl (12342 ) core 0: 618.67 IO/s 161.64 secs/100000 ios 00:11:44.638 QEMU NVMe Ctrl (12343 ) core 1: 661.33 IO/s 151.21 secs/100000 ios 00:11:44.638 QEMU NVMe Ctrl (12342 ) core 1: 661.33 IO/s 151.21 secs/100000 ios 00:11:44.638 QEMU NVMe Ctrl (12340 ) core 2: 576.00 IO/s 173.61 secs/100000 ios 00:11:44.638 QEMU NVMe Ctrl (12342 ) core 3: 746.67 IO/s 133.93 secs/100000 ios 00:11:44.638 ======================================================== 00:11:44.638 00:11:44.638 00:11:44.638 real 0m3.408s 00:11:44.638 user 0m9.313s 00:11:44.638 sys 0m0.159s 00:11:44.638 18:18:56 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.638 18:18:56 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:11:44.638 ************************************ 00:11:44.638 END TEST nvme_arbitration 00:11:44.638 ************************************ 00:11:44.638 18:18:56 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:44.638 18:18:56 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:44.638 18:18:56 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:44.638 18:18:56 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.638 18:18:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:44.638 ************************************ 00:11:44.638 START TEST nvme_single_aen 00:11:44.638 ************************************ 00:11:44.638 18:18:56 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:45.207 Asynchronous Event Request test 00:11:45.207 Attached to 0000:00:11.0 00:11:45.207 Attached to 0000:00:13.0 00:11:45.207 Attached to 0000:00:10.0 00:11:45.207 Attached to 0000:00:12.0 00:11:45.207 Reset controller to setup AER completions for this process 00:11:45.207 Registering asynchronous event callbacks... 
00:11:45.207 Getting orig temperature thresholds of all controllers 00:11:45.207 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:45.207 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:45.207 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:45.207 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:45.207 Setting all controllers temperature threshold low to trigger AER 00:11:45.207 Waiting for all controllers temperature threshold to be set lower 00:11:45.207 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:45.207 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:45.207 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:45.207 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:45.207 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:45.207 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:45.207 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:45.207 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:45.207 Waiting for all controllers to trigger AER and reset threshold 00:11:45.207 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.207 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.207 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.207 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.207 Cleaning up... 00:11:45.207 00:11:45.207 real 0m0.305s 00:11:45.207 user 0m0.112s 00:11:45.207 sys 0m0.146s 00:11:45.207 18:18:56 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.207 18:18:56 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:11:45.207 ************************************ 00:11:45.207 END TEST nvme_single_aen 00:11:45.207 ************************************ 00:11:45.207 18:18:56 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:45.207 18:18:56 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:45.207 18:18:56 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:45.207 18:18:56 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.207 18:18:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:45.207 ************************************ 00:11:45.207 START TEST nvme_doorbell_aers 00:11:45.207 ************************************ 00:11:45.207 18:18:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:11:45.207 18:18:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:11:45.207 18:18:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:45.207 18:18:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:45.207 18:18:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:45.207 18:18:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:45.207 18:18:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:11:45.207 18:18:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:45.208 18:18:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:45.208 18:18:56 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:45.208 18:18:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:45.208 18:18:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:45.208 18:18:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:45.208 18:18:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:45.466 [2024-07-22 18:18:57.325273] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:11:55.511 Executing: test_write_invalid_db 00:11:55.511 Waiting for AER completion... 00:11:55.511 Failure: test_write_invalid_db 00:11:55.511 00:11:55.511 Executing: test_invalid_db_write_overflow_sq 00:11:55.511 Waiting for AER completion... 00:11:55.511 Failure: test_invalid_db_write_overflow_sq 00:11:55.511 00:11:55.511 Executing: test_invalid_db_write_overflow_cq 00:11:55.511 Waiting for AER completion... 00:11:55.511 Failure: test_invalid_db_write_overflow_cq 00:11:55.511 00:11:55.511 18:19:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:55.511 18:19:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:55.511 [2024-07-22 18:19:07.395372] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:05.484 Executing: test_write_invalid_db 00:12:05.484 Waiting for AER completion... 00:12:05.484 Failure: test_write_invalid_db 00:12:05.484 00:12:05.484 Executing: test_invalid_db_write_overflow_sq 00:12:05.484 Waiting for AER completion... 00:12:05.484 Failure: test_invalid_db_write_overflow_sq 00:12:05.484 00:12:05.484 Executing: test_invalid_db_write_overflow_cq 00:12:05.484 Waiting for AER completion... 00:12:05.484 Failure: test_invalid_db_write_overflow_cq 00:12:05.484 00:12:05.484 18:19:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:05.484 18:19:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:05.484 [2024-07-22 18:19:17.453134] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:15.451 Executing: test_write_invalid_db 00:12:15.451 Waiting for AER completion... 00:12:15.451 Failure: test_write_invalid_db 00:12:15.451 00:12:15.451 Executing: test_invalid_db_write_overflow_sq 00:12:15.451 Waiting for AER completion... 00:12:15.451 Failure: test_invalid_db_write_overflow_sq 00:12:15.451 00:12:15.451 Executing: test_invalid_db_write_overflow_cq 00:12:15.451 Waiting for AER completion... 
00:12:15.451 Failure: test_invalid_db_write_overflow_cq 00:12:15.451 00:12:15.451 18:19:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:15.451 18:19:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:15.712 [2024-07-22 18:19:27.485503] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:25.687 Executing: test_write_invalid_db 00:12:25.687 Waiting for AER completion... 00:12:25.687 Failure: test_write_invalid_db 00:12:25.687 00:12:25.687 Executing: test_invalid_db_write_overflow_sq 00:12:25.687 Waiting for AER completion... 00:12:25.687 Failure: test_invalid_db_write_overflow_sq 00:12:25.687 00:12:25.687 Executing: test_invalid_db_write_overflow_cq 00:12:25.687 Waiting for AER completion... 00:12:25.687 Failure: test_invalid_db_write_overflow_cq 00:12:25.687 00:12:25.687 00:12:25.687 real 0m40.261s 00:12:25.687 user 0m34.229s 00:12:25.687 sys 0m5.677s 00:12:25.687 18:19:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.687 18:19:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:25.687 ************************************ 00:12:25.687 END TEST nvme_doorbell_aers 00:12:25.687 ************************************ 00:12:25.687 18:19:37 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:25.687 18:19:37 nvme -- nvme/nvme.sh@97 -- # uname 00:12:25.687 18:19:37 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:25.687 18:19:37 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:25.687 18:19:37 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:25.687 18:19:37 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.687 18:19:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.687 ************************************ 00:12:25.687 START TEST nvme_multi_aen 00:12:25.687 ************************************ 00:12:25.687 18:19:37 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:25.687 [2024-07-22 18:19:37.556917] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:25.687 [2024-07-22 18:19:37.557024] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:25.687 [2024-07-22 18:19:37.557047] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:25.687 [2024-07-22 18:19:37.558874] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:25.687 [2024-07-22 18:19:37.558926] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:25.687 [2024-07-22 18:19:37.558946] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 
00:12:25.687 [2024-07-22 18:19:37.560440] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:25.687 [2024-07-22 18:19:37.560487] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:25.687 [2024-07-22 18:19:37.560517] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:25.687 [2024-07-22 18:19:37.561929] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:25.688 [2024-07-22 18:19:37.561974] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:25.688 [2024-07-22 18:19:37.561992] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69986) is not found. Dropping the request. 00:12:25.688 Child process pid: 70506 00:12:25.946 [Child] Asynchronous Event Request test 00:12:25.946 [Child] Attached to 0000:00:11.0 00:12:25.946 [Child] Attached to 0000:00:13.0 00:12:25.946 [Child] Attached to 0000:00:10.0 00:12:25.946 [Child] Attached to 0000:00:12.0 00:12:25.946 [Child] Registering asynchronous event callbacks... 00:12:25.946 [Child] Getting orig temperature thresholds of all controllers 00:12:25.946 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:25.946 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:25.946 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:25.946 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:25.946 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:25.946 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:25.946 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:25.946 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:25.946 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:25.946 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:25.946 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:25.946 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:25.946 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:25.946 [Child] Cleaning up... 00:12:25.946 Asynchronous Event Request test 00:12:25.946 Attached to 0000:00:11.0 00:12:25.946 Attached to 0000:00:13.0 00:12:25.946 Attached to 0000:00:10.0 00:12:25.946 Attached to 0000:00:12.0 00:12:25.946 Reset controller to setup AER completions for this process 00:12:25.946 Registering asynchronous event callbacks... 
00:12:25.946 Getting orig temperature thresholds of all controllers 00:12:25.946 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:25.946 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:25.946 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:25.946 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:25.946 Setting all controllers temperature threshold low to trigger AER 00:12:25.946 Waiting for all controllers temperature threshold to be set lower 00:12:25.946 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:25.946 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:25.946 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:25.946 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:25.946 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:25.946 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:25.946 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:25.946 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:25.946 Waiting for all controllers to trigger AER and reset threshold 00:12:25.946 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:25.946 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:25.947 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:25.947 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:25.947 Cleaning up... 00:12:25.947 00:12:25.947 real 0m0.602s 00:12:25.947 user 0m0.232s 00:12:25.947 sys 0m0.265s 00:12:25.947 18:19:37 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.947 18:19:37 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:25.947 ************************************ 00:12:25.947 END TEST nvme_multi_aen 00:12:25.947 ************************************ 00:12:25.947 18:19:37 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:25.947 18:19:37 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:25.947 18:19:37 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:25.947 18:19:37 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.947 18:19:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.947 ************************************ 00:12:25.947 START TEST nvme_startup 00:12:25.947 ************************************ 00:12:25.947 18:19:37 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:26.514 Initializing NVMe Controllers 00:12:26.514 Attached to 0000:00:11.0 00:12:26.514 Attached to 0000:00:13.0 00:12:26.514 Attached to 0000:00:10.0 00:12:26.514 Attached to 0000:00:12.0 00:12:26.514 Initialization complete. 00:12:26.514 Time used:194205.438 (us). 
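The startup test above was launched as "startup -t 1000000" (nvme.sh@99) and reports "Time used" in microseconds, so -t looks like a time budget in microseconds for controller bring-up; that reading is an assumption. A re-run sketch:

    # -t 1000000 taken verbatim from the invocation above; the run here
    # finished in ~194 ms, well under the budget.
    /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000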
00:12:26.514 00:12:26.514 real 0m0.295s 00:12:26.514 user 0m0.118s 00:12:26.514 sys 0m0.136s 00:12:26.514 18:19:38 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:26.514 18:19:38 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:26.514 ************************************ 00:12:26.514 END TEST nvme_startup 00:12:26.514 ************************************ 00:12:26.514 18:19:38 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:26.514 18:19:38 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:26.514 18:19:38 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:26.514 18:19:38 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.514 18:19:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.514 ************************************ 00:12:26.514 START TEST nvme_multi_secondary 00:12:26.514 ************************************ 00:12:26.514 18:19:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:12:26.514 18:19:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=70562 00:12:26.514 18:19:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:26.514 18:19:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=70563 00:12:26.514 18:19:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:26.514 18:19:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:29.796 Initializing NVMe Controllers 00:12:29.796 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:29.796 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:29.796 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:29.796 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:29.796 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:29.796 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:29.796 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:29.796 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:29.796 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:29.796 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:29.796 Initialization complete. Launching workers. 
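nvme_multi_secondary starts several spdk_nvme_perf instances that share one shared-memory ID (-i 0) on disjoint core masks (-c 0x1/0x2/0x4), so they attach to the same controllers as one DPDK primary plus secondaries. A minimal sketch of the launch/wait pattern in nvme.sh@51-@57 above, assuming the pid0=/pid1= lines in the xtrace come from pidN=$! after backgrounding, and that the third instance runs in the foreground (a guess):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # core mask 0x1, 5 s
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # core mask 0x2, 3 s
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # core mask 0x4, 3 s
    wait "$pid0" "$pid1"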
00:12:29.796 ======================================================== 00:12:29.796 Latency(us) 00:12:29.796 Device Information : IOPS MiB/s Average min max 00:12:29.796 PCIE (0000:00:11.0) NSID 1 from core 1: 5664.58 22.13 2823.96 943.36 6825.06 00:12:29.796 PCIE (0000:00:13.0) NSID 1 from core 1: 5664.58 22.13 2824.24 934.31 6935.55 00:12:29.796 PCIE (0000:00:10.0) NSID 1 from core 1: 5664.58 22.13 2822.89 934.22 6966.73 00:12:29.796 PCIE (0000:00:12.0) NSID 1 from core 1: 5669.91 22.15 2821.37 963.95 6744.87 00:12:29.796 PCIE (0000:00:12.0) NSID 2 from core 1: 5669.91 22.15 2821.28 948.87 6268.62 00:12:29.796 PCIE (0000:00:12.0) NSID 3 from core 1: 5669.91 22.15 2821.13 951.94 6660.15 00:12:29.796 ======================================================== 00:12:29.796 Total : 34003.47 132.83 2822.48 934.22 6966.73 00:12:29.796 00:12:29.796 Initializing NVMe Controllers 00:12:29.796 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:29.796 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:29.796 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:29.796 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:29.796 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:29.796 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:29.796 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:29.796 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:29.796 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:29.796 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:29.796 Initialization complete. Launching workers. 00:12:29.796 ======================================================== 00:12:29.796 Latency(us) 00:12:29.796 Device Information : IOPS MiB/s Average min max 00:12:29.796 PCIE (0000:00:11.0) NSID 1 from core 2: 2301.06 8.99 6952.73 1940.29 14387.72 00:12:29.796 PCIE (0000:00:13.0) NSID 1 from core 2: 2301.06 8.99 6952.80 1896.44 14014.07 00:12:29.796 PCIE (0000:00:10.0) NSID 1 from core 2: 2301.06 8.99 6950.61 1750.34 14147.53 00:12:29.796 PCIE (0000:00:12.0) NSID 1 from core 2: 2301.06 8.99 6952.66 1752.23 13932.63 00:12:29.796 PCIE (0000:00:12.0) NSID 2 from core 2: 2301.06 8.99 6943.18 1843.53 15477.31 00:12:29.796 PCIE (0000:00:12.0) NSID 3 from core 2: 2301.06 8.99 6943.04 1690.96 18700.52 00:12:29.796 ======================================================== 00:12:29.796 Total : 13806.34 53.93 6949.17 1690.96 18700.52 00:12:29.796 00:12:30.054 18:19:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 70562 00:12:31.972 Initializing NVMe Controllers 00:12:31.972 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:31.972 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:31.972 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:31.972 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:31.972 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:31.972 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:31.972 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:31.972 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:31.972 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:31.972 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:31.972 Initialization complete. Launching workers. 
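The Latency(us) tables report IOPS, throughput, and average/min/max latency per namespace. Two quick consistency checks against the first "from core 1" row above (5664.58 IOPS, 22.13 MiB/s, 2823.96 us average at queue depth 16 per the -q 16 invocation):

    # 4 KiB I/Os: IOPS x 4096 B should match the MiB/s column, and
    # Little's law (queue depth / average latency) should match the IOPS.
    awk 'BEGIN {
        printf "MiB/s : %.2f\n", 5664.58 * 4096 / 1048576    # -> 22.13
        printf "IOPS  : %.0f\n", 16 / 2823.96e-6             # -> ~5666
    }'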
00:12:31.972 ======================================================== 00:12:31.972 Latency(us) 00:12:31.972 Device Information : IOPS MiB/s Average min max 00:12:31.972 PCIE (0000:00:11.0) NSID 1 from core 0: 8469.02 33.08 1888.81 942.95 6185.34 00:12:31.972 PCIE (0000:00:13.0) NSID 1 from core 0: 8469.02 33.08 1888.77 929.72 6256.65 00:12:31.972 PCIE (0000:00:10.0) NSID 1 from core 0: 8469.02 33.08 1887.47 888.02 6028.25 00:12:31.972 PCIE (0000:00:12.0) NSID 1 from core 0: 8469.02 33.08 1888.65 911.16 5949.20 00:12:31.972 PCIE (0000:00:12.0) NSID 2 from core 0: 8469.02 33.08 1888.59 920.13 6227.99 00:12:31.972 PCIE (0000:00:12.0) NSID 3 from core 0: 8469.02 33.08 1888.53 878.72 6288.08 00:12:31.972 ======================================================== 00:12:31.972 Total : 50814.14 198.49 1888.47 878.72 6288.08 00:12:31.972 00:12:31.972 18:19:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 70563 00:12:31.972 18:19:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=70639 00:12:31.972 18:19:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:31.972 18:19:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=70640 00:12:31.972 18:19:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:31.972 18:19:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:35.257 Initializing NVMe Controllers 00:12:35.257 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:35.257 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:35.257 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:35.257 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:35.257 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:35.257 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:35.257 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:35.257 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:35.257 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:35.257 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:35.257 Initialization complete. Launching workers. 
00:12:35.257 ======================================================== 00:12:35.257 Latency(us) 00:12:35.257 Device Information : IOPS MiB/s Average min max 00:12:35.257 PCIE (0000:00:11.0) NSID 1 from core 1: 5307.38 20.73 3014.19 1167.62 11474.55 00:12:35.257 PCIE (0000:00:13.0) NSID 1 from core 1: 5307.38 20.73 3014.42 1171.52 11237.67 00:12:35.257 PCIE (0000:00:10.0) NSID 1 from core 1: 5307.38 20.73 3013.00 1142.44 11985.01 00:12:35.257 PCIE (0000:00:12.0) NSID 1 from core 1: 5307.38 20.73 3014.61 1158.27 11911.32 00:12:35.257 PCIE (0000:00:12.0) NSID 2 from core 1: 5307.38 20.73 3014.93 1167.79 11435.30 00:12:35.257 PCIE (0000:00:12.0) NSID 3 from core 1: 5307.38 20.73 3014.96 1164.50 11784.44 00:12:35.257 ======================================================== 00:12:35.257 Total : 31844.28 124.39 3014.35 1142.44 11985.01 00:12:35.257 00:12:35.516 Initializing NVMe Controllers 00:12:35.516 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:35.516 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:35.516 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:35.516 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:35.516 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:35.516 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:35.516 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:35.516 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:35.516 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:35.516 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:35.516 Initialization complete. Launching workers. 00:12:35.516 ======================================================== 00:12:35.516 Latency(us) 00:12:35.516 Device Information : IOPS MiB/s Average min max 00:12:35.516 PCIE (0000:00:11.0) NSID 1 from core 0: 5407.47 21.12 2958.32 980.71 9606.39 00:12:35.516 PCIE (0000:00:13.0) NSID 1 from core 0: 5407.47 21.12 2958.15 968.68 9162.65 00:12:35.516 PCIE (0000:00:10.0) NSID 1 from core 0: 5407.47 21.12 2956.52 946.03 10378.60 00:12:35.516 PCIE (0000:00:12.0) NSID 1 from core 0: 5407.47 21.12 2957.83 993.32 9988.22 00:12:35.516 PCIE (0000:00:12.0) NSID 2 from core 0: 5407.47 21.12 2957.66 811.18 9800.72 00:12:35.516 PCIE (0000:00:12.0) NSID 3 from core 0: 5407.47 21.12 2957.51 784.37 9707.07 00:12:35.516 ======================================================== 00:12:35.516 Total : 32444.81 126.74 2957.66 784.37 10378.60 00:12:35.516 00:12:37.445 Initializing NVMe Controllers 00:12:37.445 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:37.445 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:37.445 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:37.445 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:37.445 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:37.445 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:37.445 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:37.445 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:37.445 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:37.445 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:37.445 Initialization complete. Launching workers. 
00:12:37.445 ======================================================== 00:12:37.445 Latency(us) 00:12:37.445 Device Information : IOPS MiB/s Average min max 00:12:37.445 PCIE (0000:00:11.0) NSID 1 from core 2: 3660.65 14.30 4369.97 936.84 21677.20 00:12:37.445 PCIE (0000:00:13.0) NSID 1 from core 2: 3660.65 14.30 4370.09 947.11 21641.13 00:12:37.445 PCIE (0000:00:10.0) NSID 1 from core 2: 3660.65 14.30 4368.04 939.69 21979.07 00:12:37.445 PCIE (0000:00:12.0) NSID 1 from core 2: 3660.65 14.30 4369.95 957.15 26738.92 00:12:37.445 PCIE (0000:00:12.0) NSID 2 from core 2: 3660.65 14.30 4369.59 949.41 21343.64 00:12:37.445 PCIE (0000:00:12.0) NSID 3 from core 2: 3660.65 14.30 4369.72 897.97 21924.43 00:12:37.445 ======================================================== 00:12:37.445 Total : 21963.89 85.80 4369.56 897.97 26738.92 00:12:37.445 00:12:37.703 18:19:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 70639 00:12:37.704 18:19:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 70640 00:12:37.704 00:12:37.704 real 0m11.246s 00:12:37.704 user 0m18.589s 00:12:37.704 sys 0m0.962s 00:12:37.704 18:19:49 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.704 18:19:49 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:12:37.704 ************************************ 00:12:37.704 END TEST nvme_multi_secondary 00:12:37.704 ************************************ 00:12:37.704 18:19:49 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:37.704 18:19:49 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:37.704 18:19:49 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:12:37.704 18:19:49 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/69571 ]] 00:12:37.704 18:19:49 nvme -- common/autotest_common.sh@1088 -- # kill 69571 00:12:37.704 18:19:49 nvme -- common/autotest_common.sh@1089 -- # wait 69571 00:12:37.704 [2024-07-22 18:19:49.587902] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.588028] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.588067] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.588101] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.591678] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.591770] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.591810] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.591867] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 
00:12:37.704 [2024-07-22 18:19:49.595420] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.595500] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.595533] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.595566] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.598791] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.598851] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.598874] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.704 [2024-07-22 18:19:49.598896] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70505) is not found. Dropping the request. 00:12:37.962 18:19:49 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:12:37.962 18:19:49 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:12:37.962 18:19:49 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:37.962 18:19:49 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:37.962 18:19:49 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.962 18:19:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.962 ************************************ 00:12:37.962 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:37.962 ************************************ 00:12:37.962 18:19:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:38.220 * Looking for test storage... 
00:12:38.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:38.220 18:19:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:38.220 18:19:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:38.220 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=70794 00:12:38.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
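get_first_nvme_bdf above resolves the controller list by asking gen_nvme.sh for a bdev config and pulling every params.traddr out with jq, then returns element zero. A minimal standalone sketch of that enumeration, with the repo path as laid out in this run:

  #!/usr/bin/env bash
  # Enumerate NVMe PCI addresses the way autotest_common.sh's get_nvme_bdfs does above.
  rootdir=/home/vagrant/spdk_repo/spdk   # repo location assumed from this run
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "No nvme bdfs found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
  echo "first bdf: ${bdfs[0]}"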
00:12:38.221 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:38.221 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:38.221 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 70794 00:12:38.221 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 70794 ']' 00:12:38.221 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.221 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.221 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.221 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.221 18:19:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:38.221 [2024-07-22 18:19:50.179743] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:38.221 [2024-07-22 18:19:50.179889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70794 ] 00:12:38.479 [2024-07-22 18:19:50.365591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.738 [2024-07-22 18:19:50.658783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.738 [2024-07-22 18:19:50.659063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.738 [2024-07-22 18:19:50.659165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.738 [2024-07-22 18:19:50.659224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:39.676 nvme0n1 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_64G0z.txt 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.676 18:19:51 
nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:39.676 true 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721672391 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=70821 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:39.676 18:19:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:41.598 [2024-07-22 18:19:53.558266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:12:41.598 [2024-07-22 18:19:53.558654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:41.598 [2024-07-22 18:19:53.558712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:41.598 [2024-07-22 18:19:53.558739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.598 [2024-07-22 18:19:53.560696] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
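The trace above is the heart of the stuck-admin-command test, all driven over rpc.py: arm a one-shot injection that holds admin opcode 0x0a (Get Features) for up to 15 s and then fails it with sct=0/sc=1, submit a Get Features through bdev_nvme_send_cmd in the background, and reset the controller so the held command is completed manually, which is exactly what the NOTICE lines report. A condensed sketch of the same RPC sequence (rpc.py path, PCI address, and the base64 command all taken from this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # 64-byte admin command, base64-encoded; byte 0 (opcode) is 0x0a = Get Features,
  # and cdw10 = 0x7 selects the Number of Queues feature seen in the trace
  $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
      -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
  sleep 2                                  # give the command time to get stuck
  $rpc bdev_nvme_reset_controller nvme0    # the reset completes the held request manually
  wait                                     # reap the background send_cmd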
00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.598 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 70821 00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 70821 00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 70821 00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:41.598 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_64G0z.txt 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 
-- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_64G0z.txt 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 70794 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 70794 ']' 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 70794 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70794 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:41.858 killing process with pid 70794 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70794' 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 70794 00:12:41.858 18:19:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 70794 00:12:44.415 18:19:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:44.415 18:19:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:44.415 ************************************ 00:12:44.415 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:44.415 ************************************ 00:12:44.415 00:12:44.415 real 0m6.019s 00:12:44.415 user 0m20.524s 00:12:44.415 sys 0m0.709s 00:12:44.415 18:19:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:44.415 18:19:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:44.415 18:19:55 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:44.415 18:19:55 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:44.415 18:19:55 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:44.415 18:19:55 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:44.415 18:19:55 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.415 18:19:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.415 ************************************ 00:12:44.415 START TEST nvme_fio 00:12:44.415 ************************************ 00:12:44.415 18:19:55 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:12:44.415 18:19:55 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:44.415 18:19:55 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:44.415 18:19:55 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:44.415 
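A few entries up, base64_decode_bits unpacks the completion that bdev_nvme_send_cmd saved to the temp file: the .cpl field is the raw 16-byte NVMe CQE, where DW3 carries the phase tag in bit 16, the status code in bits 17-24, and the status code type in bits 25-27. A standalone sketch of the same extraction (not the suite's helper, but the same base64/hexdump route), using the completion captured here:

  cpl_b64='AAAAAAAAAAAAAAAAAAACAA=='           # .cpl value from /tmp/err_inj_64G0z.txt above
  bytes=($(printf '%s' "$cpl_b64" | base64 -d | hexdump -ve '/1 "0x%02x\n"'))
  status=$(( (bytes[15] << 8) | bytes[14] ))   # bytes 14-15: the 16-bit status word
  printf 'sc=0x%x sct=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
  # prints sc=0x1 sct=0x0, matching the injected --sct 0 --sc 1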
18:19:55 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:44.415 18:19:55 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:12:44.415 18:19:55 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:44.415 18:19:55 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:44.415 18:19:55 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:44.415 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:44.415 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:44.415 18:19:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:44.415 18:19:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:44.415 18:19:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:44.415 18:19:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:44.415 18:19:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:44.415 18:19:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:44.415 18:19:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:44.673 18:19:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:44.673 18:19:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 
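What follows is one pass of nvme.sh's per-controller fio loop: identify the device, keep bs=4096 (none of these QEMU namespaces report 'Extended Data LBA', which is what the grep above checks for), resolve the ASan runtime so it loads ahead of the fio plugin, and hand fio a filename in the plugin's trtype/traddr form, with dots in place of colons because fio splits filenames on ':'. A sketch of the loop body under those assumptions:

  spdk=/home/vagrant/spdk_repo/spdk
  plugin=$spdk/build/fio/spdk_nvme
  asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')   # sanitizer runtime must be preloaded first
  for bdf in "${bdfs[@]}"; do
      # nvme.sh greps the spdk_nvme_identify output for 'Extended Data LBA' to pick
      # the block size; every controller in this run uses plain 4 KiB LBAs.
      bs=4096
      LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
          "$spdk/app/fio/nvme/example_config.fio" \
          "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
  done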
00:12:44.673 18:19:56 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:44.931 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:44.931 fio-3.35 00:12:44.931 Starting 1 thread 00:12:48.215 00:12:48.215 test: (groupid=0, jobs=1): err= 0: pid=70972: Mon Jul 22 18:19:59 2024 00:12:48.215 read: IOPS=15.9k, BW=62.0MiB/s (65.0MB/s)(124MiB/2001msec) 00:12:48.215 slat (nsec): min=4729, max=84678, avg=6729.34, stdev=2157.13 00:12:48.215 clat (usec): min=295, max=8957, avg=4011.43, stdev=721.47 00:12:48.215 lat (usec): min=301, max=9042, avg=4018.15, stdev=722.47 00:12:48.215 clat percentiles (usec): 00:12:48.215 | 1.00th=[ 3261], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3490], 00:12:48.215 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3752], 60.00th=[ 4113], 00:12:48.215 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4686], 95.00th=[ 5211], 00:12:48.215 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 7767], 99.95th=[ 7963], 00:12:48.215 | 99.99th=[ 8848] 00:12:48.215 bw ( KiB/s): min=53496, max=67424, per=97.31%, avg=61738.67, stdev=7307.69, samples=3 00:12:48.215 iops : min=13374, max=16856, avg=15434.67, stdev=1826.92, samples=3 00:12:48.215 write: IOPS=15.9k, BW=62.0MiB/s (65.0MB/s)(124MiB/2001msec); 0 zone resets 00:12:48.215 slat (nsec): min=4837, max=43029, avg=6890.78, stdev=2065.81 00:12:48.215 clat (usec): min=242, max=8877, avg=4022.87, stdev=716.88 00:12:48.215 lat (usec): min=248, max=8891, avg=4029.76, stdev=717.87 00:12:48.215 clat percentiles (usec): 00:12:48.215 | 1.00th=[ 3261], 5.00th=[ 3392], 10.00th=[ 3425], 20.00th=[ 3523], 00:12:48.215 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 4113], 00:12:48.215 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 5145], 00:12:48.215 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 7767], 99.95th=[ 7963], 00:12:48.215 | 99.99th=[ 8717] 00:12:48.215 bw ( KiB/s): min=53888, max=66792, per=96.56%, avg=61333.33, stdev=6677.46, samples=3 00:12:48.215 iops : min=13472, max=16698, avg=15333.33, stdev=1669.36, samples=3 00:12:48.215 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:12:48.215 lat (msec) : 2=0.05%, 4=56.96%, 10=42.95% 00:12:48.215 cpu : usr=98.80%, sys=0.20%, ctx=7, majf=0, minf=607 00:12:48.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:48.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:48.215 issued rwts: total=31739,31775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:48.215 00:12:48.215 Run status group 0 (all jobs): 00:12:48.215 READ: bw=62.0MiB/s (65.0MB/s), 62.0MiB/s-62.0MiB/s (65.0MB/s-65.0MB/s), io=124MiB (130MB), run=2001-2001msec 00:12:48.215 WRITE: bw=62.0MiB/s (65.0MB/s), 62.0MiB/s-62.0MiB/s (65.0MB/s-65.0MB/s), io=124MiB (130MB), run=2001-2001msec 00:12:48.215 ----------------------------------------------------- 00:12:48.215 Suppressions used: 00:12:48.215 count bytes template 00:12:48.215 1 32 /usr/src/fio/parse.c 00:12:48.215 1 8 libtcmalloc_minimal.so 00:12:48.215 ----------------------------------------------------- 00:12:48.215 00:12:48.215 18:20:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:48.215 18:20:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for 
bdf in "${bdfs[@]}" 00:12:48.215 18:20:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:48.215 18:20:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:48.474 18:20:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:48.474 18:20:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:48.732 18:20:00 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:48.732 18:20:00 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:48.732 18:20:00 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:48.990 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:48.990 fio-3.35 00:12:48.990 Starting 1 thread 00:12:53.176 00:12:53.176 test: (groupid=0, jobs=1): err= 0: pid=71033: Mon Jul 22 18:20:04 2024 00:12:53.176 read: IOPS=16.7k, BW=65.1MiB/s (68.3MB/s)(130MiB/2001msec) 00:12:53.176 slat (nsec): min=4647, max=63736, avg=6395.40, stdev=1773.31 00:12:53.176 clat (usec): min=318, max=7910, avg=3816.55, stdev=420.92 00:12:53.176 lat (usec): min=323, max=7948, avg=3822.95, stdev=421.49 00:12:53.176 clat percentiles (usec): 00:12:53.176 | 1.00th=[ 3032], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3523], 00:12:53.176 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3752], 00:12:53.176 | 70.00th=[ 4113], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 
4490], 00:12:53.176 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 6325], 99.95th=[ 6652], 00:12:53.176 | 99.99th=[ 7832] 00:12:53.176 bw ( KiB/s): min=64280, max=71640, per=100.00%, avg=67077.33, stdev=3984.93, samples=3 00:12:53.176 iops : min=16070, max=17910, avg=16769.33, stdev=996.23, samples=3 00:12:53.176 write: IOPS=16.7k, BW=65.2MiB/s (68.4MB/s)(131MiB/2001msec); 0 zone resets 00:12:53.176 slat (nsec): min=4811, max=36047, avg=6520.62, stdev=1642.74 00:12:53.176 clat (usec): min=268, max=7854, avg=3827.72, stdev=418.98 00:12:53.176 lat (usec): min=274, max=7861, avg=3834.24, stdev=419.51 00:12:53.176 clat percentiles (usec): 00:12:53.176 | 1.00th=[ 3032], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3523], 00:12:53.176 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3752], 00:12:53.176 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4490], 00:12:53.176 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 6259], 99.95th=[ 6587], 00:12:53.176 | 99.99th=[ 7570] 00:12:53.176 bw ( KiB/s): min=64080, max=71216, per=100.00%, avg=67029.33, stdev=3725.44, samples=3 00:12:53.176 iops : min=16020, max=17804, avg=16757.33, stdev=931.36, samples=3 00:12:53.176 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:12:53.176 lat (msec) : 2=0.05%, 4=67.90%, 10=32.01% 00:12:53.176 cpu : usr=98.90%, sys=0.25%, ctx=4, majf=0, minf=606 00:12:53.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:53.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:53.176 issued rwts: total=33345,33409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:53.176 00:12:53.176 Run status group 0 (all jobs): 00:12:53.176 READ: bw=65.1MiB/s (68.3MB/s), 65.1MiB/s-65.1MiB/s (68.3MB/s-68.3MB/s), io=130MiB (137MB), run=2001-2001msec 00:12:53.176 WRITE: bw=65.2MiB/s (68.4MB/s), 65.2MiB/s-65.2MiB/s (68.4MB/s-68.4MB/s), io=131MiB (137MB), run=2001-2001msec 00:12:53.176 ----------------------------------------------------- 00:12:53.176 Suppressions used: 00:12:53.176 count bytes template 00:12:53.176 1 32 /usr/src/fio/parse.c 00:12:53.176 1 8 libtcmalloc_minimal.so 00:12:53.176 ----------------------------------------------------- 00:12:53.176 00:12:53.176 18:20:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:53.176 18:20:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:53.176 18:20:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:53.176 18:20:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:53.176 18:20:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:53.176 18:20:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:53.176 18:20:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:53.176 18:20:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:53.176 18:20:05 nvme.nvme_fio 
-- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:53.176 18:20:05 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:53.435 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:53.435 fio-3.35 00:12:53.435 Starting 1 thread 00:12:57.622 00:12:57.622 test: (groupid=0, jobs=1): err= 0: pid=71098: Mon Jul 22 18:20:08 2024 00:12:57.622 read: IOPS=17.1k, BW=66.7MiB/s (70.0MB/s)(133MiB/2001msec) 00:12:57.622 slat (nsec): min=4492, max=62397, avg=6289.13, stdev=1769.71 00:12:57.622 clat (usec): min=318, max=9443, avg=3724.65, stdev=458.09 00:12:57.622 lat (usec): min=324, max=9505, avg=3730.94, stdev=458.79 00:12:57.622 clat percentiles (usec): 00:12:57.622 | 1.00th=[ 3163], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3425], 00:12:57.622 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3621], 00:12:57.622 | 70.00th=[ 3752], 80.00th=[ 4146], 90.00th=[ 4293], 95.00th=[ 4424], 00:12:57.622 | 99.00th=[ 4948], 99.50th=[ 5932], 99.90th=[ 7570], 99.95th=[ 7898], 00:12:57.622 | 99.99th=[ 9241] 00:12:57.622 bw ( KiB/s): min=64800, max=71824, per=98.72%, avg=67440.00, stdev=3823.00, samples=3 00:12:57.622 iops : min=16200, max=17956, avg=16860.00, stdev=955.75, samples=3 00:12:57.622 write: IOPS=17.1k, BW=66.8MiB/s (70.1MB/s)(134MiB/2001msec); 0 zone resets 00:12:57.622 slat (nsec): min=4513, max=48880, avg=6376.44, stdev=1780.39 00:12:57.622 clat (usec): min=236, max=9283, avg=3733.43, stdev=466.62 00:12:57.622 lat (usec): min=255, max=9294, avg=3739.80, stdev=467.31 00:12:57.622 clat percentiles (usec): 00:12:57.622 | 1.00th=[ 3163], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425], 00:12:57.622 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3621], 00:12:57.622 | 70.00th=[ 3752], 80.00th=[ 4146], 90.00th=[ 4293], 95.00th=[ 4424], 00:12:57.622 | 99.00th=[ 5014], 99.50th=[ 6128], 99.90th=[ 7635], 99.95th=[ 8029], 00:12:57.622 | 99.99th=[ 9110] 00:12:57.622 bw ( KiB/s): min=64560, max=71352, 
per=98.38%, avg=67336.00, stdev=3561.74, samples=3 00:12:57.622 iops : min=16140, max=17838, avg=16834.00, stdev=890.44, samples=3 00:12:57.622 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:57.622 lat (msec) : 2=0.13%, 4=73.60%, 10=26.23% 00:12:57.622 cpu : usr=99.00%, sys=0.10%, ctx=4, majf=0, minf=607 00:12:57.622 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:57.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:57.622 issued rwts: total=34174,34239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.622 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:57.622 00:12:57.622 Run status group 0 (all jobs): 00:12:57.622 READ: bw=66.7MiB/s (70.0MB/s), 66.7MiB/s-66.7MiB/s (70.0MB/s-70.0MB/s), io=133MiB (140MB), run=2001-2001msec 00:12:57.622 WRITE: bw=66.8MiB/s (70.1MB/s), 66.8MiB/s-66.8MiB/s (70.1MB/s-70.1MB/s), io=134MiB (140MB), run=2001-2001msec 00:12:57.622 ----------------------------------------------------- 00:12:57.622 Suppressions used: 00:12:57.622 count bytes template 00:12:57.622 1 32 /usr/src/fio/parse.c 00:12:57.622 1 8 libtcmalloc_minimal.so 00:12:57.622 ----------------------------------------------------- 00:12:57.622 00:12:57.622 18:20:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:57.622 18:20:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:57.622 18:20:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:57.622 18:20:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:57.622 18:20:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:57.622 18:20:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:57.881 18:20:09 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:57.881 18:20:09 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:57.881 18:20:09 nvme.nvme_fio -- 
common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:57.881 18:20:09 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:58.141 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:58.141 fio-3.35 00:12:58.141 Starting 1 thread 00:13:02.321 00:13:02.321 test: (groupid=0, jobs=1): err= 0: pid=71161: Mon Jul 22 18:20:14 2024 00:13:02.321 read: IOPS=17.0k, BW=66.5MiB/s (69.8MB/s)(133MiB/2001msec) 00:13:02.321 slat (nsec): min=4660, max=62383, avg=6333.62, stdev=1752.57 00:13:02.321 clat (usec): min=277, max=8875, avg=3735.88, stdev=459.79 00:13:02.321 lat (usec): min=282, max=8938, avg=3742.21, stdev=460.41 00:13:02.321 clat percentiles (usec): 00:13:02.321 | 1.00th=[ 2835], 5.00th=[ 3261], 10.00th=[ 3359], 20.00th=[ 3458], 00:13:02.321 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3687], 00:13:02.321 | 70.00th=[ 3785], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4490], 00:13:02.321 | 99.00th=[ 4686], 99.50th=[ 5473], 99.90th=[ 7308], 99.95th=[ 7504], 00:13:02.321 | 99.99th=[ 8717] 00:13:02.321 bw ( KiB/s): min=62152, max=72680, per=97.99%, avg=66749.33, stdev=5389.16, samples=3 00:13:02.321 iops : min=15538, max=18170, avg=16687.33, stdev=1347.29, samples=3 00:13:02.321 write: IOPS=17.1k, BW=66.6MiB/s (69.9MB/s)(133MiB/2001msec); 0 zone resets 00:13:02.321 slat (nsec): min=4756, max=74822, avg=6456.97, stdev=1805.24 00:13:02.321 clat (usec): min=306, max=8708, avg=3743.02, stdev=453.70 00:13:02.321 lat (usec): min=312, max=8721, avg=3749.48, stdev=454.30 00:13:02.321 clat percentiles (usec): 00:13:02.321 | 1.00th=[ 2868], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3458], 00:13:02.321 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3687], 00:13:02.321 | 70.00th=[ 3785], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4490], 00:13:02.321 | 99.00th=[ 4686], 99.50th=[ 5407], 99.90th=[ 7373], 99.95th=[ 7504], 00:13:02.321 | 99.99th=[ 8455] 00:13:02.322 bw ( KiB/s): min=62040, max=72136, per=97.72%, avg=66696.00, stdev=5093.46, samples=3 00:13:02.322 iops : min=15510, max=18034, avg=16674.00, stdev=1273.36, samples=3 00:13:02.322 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:02.322 lat (msec) : 2=0.14%, 4=75.63%, 10=24.19% 00:13:02.322 cpu : usr=99.15%, sys=0.00%, ctx=5, majf=0, minf=605 00:13:02.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:02.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.322 issued rwts: total=34076,34142,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.322 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.322 00:13:02.322 Run status group 0 (all jobs): 00:13:02.322 READ: bw=66.5MiB/s (69.8MB/s), 66.5MiB/s-66.5MiB/s (69.8MB/s-69.8MB/s), io=133MiB (140MB), run=2001-2001msec 00:13:02.322 WRITE: bw=66.6MiB/s (69.9MB/s), 66.6MiB/s-66.6MiB/s (69.9MB/s-69.9MB/s), io=133MiB (140MB), run=2001-2001msec 00:13:02.322 
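Bandwidth and IOPS in these four runs cross-check against the 4 KiB block size: 66.5 MiB/s of reads is 66.5 * 1048576 / 4096, roughly 17.0k IOPS, in line with the 17.1k fio reports here, and the same relation holds for the earlier devices (62.0 MiB/s for roughly 15.9k IOPS on 0000:00:10.0, 65.1 MiB/s for roughly 16.7k on 0000:00:11.0).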
----------------------------------------------------- 00:13:02.322 Suppressions used: 00:13:02.322 count bytes template 00:13:02.322 1 32 /usr/src/fio/parse.c 00:13:02.322 1 8 libtcmalloc_minimal.so 00:13:02.322 ----------------------------------------------------- 00:13:02.322 00:13:02.322 18:20:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:02.322 18:20:14 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:02.322 00:13:02.322 real 0m18.334s 00:13:02.322 user 0m14.249s 00:13:02.322 sys 0m3.860s 00:13:02.322 18:20:14 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:02.322 ************************************ 00:13:02.322 END TEST nvme_fio 00:13:02.322 ************************************ 00:13:02.322 18:20:14 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:02.579 18:20:14 nvme -- common/autotest_common.sh@1142 -- # return 0 00:13:02.579 ************************************ 00:13:02.579 END TEST nvme 00:13:02.579 ************************************ 00:13:02.579 00:13:02.579 real 1m32.897s 00:13:02.579 user 3m46.209s 00:13:02.579 sys 0m16.663s 00:13:02.579 18:20:14 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:02.579 18:20:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.579 18:20:14 -- common/autotest_common.sh@1142 -- # return 0 00:13:02.579 18:20:14 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:13:02.579 18:20:14 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:02.579 18:20:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:02.579 18:20:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.579 18:20:14 -- common/autotest_common.sh@10 -- # set +x 00:13:02.579 ************************************ 00:13:02.579 START TEST nvme_scc 00:13:02.579 ************************************ 00:13:02.579 18:20:14 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:02.579 * Looking for test storage... 
00:13:02.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:02.579 18:20:14 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:02.579 18:20:14 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:02.579 18:20:14 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:02.579 18:20:14 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:02.579 18:20:14 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:02.579 18:20:14 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.579 18:20:14 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.579 18:20:14 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.579 18:20:14 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.579 18:20:14 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.579 18:20:14 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.579 18:20:14 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:02.579 18:20:14 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.579 18:20:14 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:02.579 18:20:14 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:02.579 18:20:14 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:02.579 18:20:14 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:02.579 18:20:14 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:02.579 18:20:14 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:02.579 18:20:14 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:02.579 18:20:14 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:02.579 18:20:14 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:02.579 18:20:14 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.579 18:20:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:02.579 18:20:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:02.579 18:20:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:02.579 18:20:14 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:03.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:03.144 Waiting for block devices as requested 00:13:03.144 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:03.402 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:03.402 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:03.402 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.722 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:08.722 18:20:20 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:08.722 18:20:20 nvme_scc -- scripts/common.sh@15 -- # local i 00:13:08.722 18:20:20 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:08.722 18:20:20 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:08.722 18:20:20 nvme_scc -- scripts/common.sh@24 -- # return 0 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.722 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:08.723 
18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:08.723 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:08.724 18:20:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
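A minimal sketch of the loop generating the trace above, paraphrased from the nvme/functions.sh@16-23 call sites it logs: nvme-cli's plain-text id-ctrl/id-ns output is split on ':' into reg/val pairs and stored in a global associative array named after the device. The helper name and the NVME_CLI variable below are illustrative stand-ins, not the suite's own names.

  nvme_get_sketch() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # declares the global array, e.g. nvme0=()
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}           # field name, padding stripped
      val=${val# }                       # value, minus the space after ':'
      [[ -n $reg && -n $val ]] || continue
      eval "${ref}[${reg}]=\$val"        # -> nvme0[oacs]='0x12a', etc.
    done < <("${NVME_CLI:-nvme}" "$@")   # e.g. id-ctrl /dev/nvme0
  }
  nvme_get_sketch nvme0 id-ctrl /dev/nvme0 && echo "${nvme0[oncs]}"

The eval is what produces the paired "eval 'nvme0[x]="y"'" / "nvme0[x]=y" lines throughout this trace: xtrace prints eval's argument once unexpanded and then the assignment it performs.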
00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:08.724 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:08.725 18:20:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
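A few of the controller fields captured just above decode as follows (values from this trace; bit and nibble layouts per the NVMe base specification, so treat this as an illustrative check rather than suite code). ONCS bit 8 is the Copy command, which is presumably why this scc (Simple Copy Command) suite collects it:

  oncs=0x15d sqes=0x66 cqes=0x44
  (( (oncs >> 8) & 1 )) && echo "Copy (Simple Copy) supported"       # bit 8 of 0x15d
  (( (oncs >> 2) & 1 )) && echo "Dataset Management supported"       # bit 2
  echo "SQ entries: $((1 << (sqes & 0xf)))-$((1 << (sqes >> 4))) B"  # 64-64
  echo "CQ entries: $((1 << (cqes & 0xf)))-$((1 << (cqes >> 4))) B"  # 16-16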
00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.725 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
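The nvme0[rwt] value stored just above is a parsing artifact rather than device data: nvme-cli prints power state 0 across two lines, and with IFS=: the read hands everything after each line's first colon (later colons included) to the last variable. A two-line demonstration:

  IFS=: read -r reg val <<< 'rwt:0 rwl:0 idle_power:- active_power:-'
  echo "reg=$reg"   # rwt
  echo "val=$val"   # 0 rwl:0 idle_power:- active_power:-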
00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:08.726 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
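Sizing the namespace from the id-ns fields captured above: nsze/ncap/nuse are all 0x140000, and flbas=0x4 selects LBA format 4, whose 'lbads:12' entry (4096-byte blocks, marked in use) appears a little further down this trace. A quick check of the arithmetic:

  nsze=0x140000 lbads=12   # values from this trace
  echo "$(( nsze )) blocks x $(( 1 << lbads )) B = $(( nsze * (1 << lbads) )) B"
  # 1310720 * 4096 = 5368709120 bytes, i.e. exactly 5 GiB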
00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.727 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
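The mssrl/mcl/msrc triple captured just above bounds any Simple Copy command built against this namespace; per the NVMe spec MSRC is a 0's-based count while MSSRL and MCL are in logical blocks. With this trace's values:

  mssrl=128 mcl=128 msrc=127
  echo "<= $(( msrc + 1 )) source ranges, <= $mssrl LBAs each, <= $mcl LBAs total"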
00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:08.728 18:20:20 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:08.728 18:20:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:08.728 18:20:20 nvme_scc -- scripts/common.sh@15 -- # local i 00:13:08.728 18:20:20 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:08.729 18:20:20 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:08.729 18:20:20 nvme_scc -- scripts/common.sh@24 -- # return 0 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.729 18:20:20 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:08.729 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:08.730 18:20:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:08.730 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 
18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:08.731 18:20:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:08.731 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:08.732 18:20:20 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:08.732 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:08.733 18:20:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:08.733 18:20:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.734 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 
18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
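
The entries above show nvme_get filling the nvme1n1 associative array one register per pass of the functions.sh@21-23 loop: split an id-ns output line on ':', skip empty values, then eval the pair into the array. A minimal sketch of that loop, reconstructed only from these trace lines and not from the SPDK source itself (the whitespace trimming is an assumption; the trace only shows clean keys and values):

    # Reconstructed from the functions.sh@16-23 trace lines; not the
    # verbatim SPDK source. Usage: nvme_get nvme1n1 id-ns /dev/nvme1n1
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # e.g. declares nvme1n1=(), as at @20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # the @22 guard in the trace
            reg=${reg//[[:space:]]/}           # assumed: keys like 'nsze' arrive padded
            val=${val#"${val%%[![:space:]]*}"} # assumed: strip leading, keep trailing spaces
            eval "${ref}[${reg}]=\"${val}\""   # e.g. nvme1n1[nsze]="0x17a17a", as at @23
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
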
00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.735 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.736 18:20:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:08.736 18:20:20 nvme_scc -- scripts/common.sh@15 -- # local i 00:13:08.736 18:20:20 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:13:08.736 18:20:20 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:08.736 18:20:20 nvme_scc -- scripts/common.sh@24 -- # return 0 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.736 18:20:20 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:08.736 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
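
One detail worth decoding from the nvme1n1 id-ns block above: flbas=0x7 selects lbaf7, 'ms:64 lbads:12 rp:0 (in use)'. lbads is the log2 of the LBA data size, so lbads:12 means 4096-byte blocks (lbads:9 would be 512), with 64 bytes of metadata per block. Combined with nsze, that gives the namespace capacity:

    # nsze=0x17a17a blocks of 2^12 bytes each (values from the id-ns dump above)
    printf '%d bytes\n' $(( 0x17a17a * (1 << 12) ))   # 6343335936, about 6.3 GB
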
00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:08.737 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
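
For context on the loop driving this dump: the functions.sh@47-63 entries earlier (pci_can_use, ctrl_dev=, nvme_get, _ctrl_ns[...]=, ctrls/nvmes/bdfs/ordered_ctrls assignments) outline how each /sys/class/nvme controller and its namespaces get registered. A sketch of that scan as it would sit inside a function, reconstructed from those trace lines only — the bdf lookup via readlink is an assumption, since the trace shows only the resulting pci= value:

    # Reconstructed scan; belongs inside a function in the real script
    # (local -n is only valid there). Helper names are as they appear in the trace.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed lookup; trace shows e.g. pci=0000:00:12.0
        pci_can_use "$pci" || continue                   # scripts/common.sh gate seen at @50
        ctrl_dev=${ctrl##*/}                             # nvme1, nvme2, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        local -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do              # /sys/class/nvme/nvme1/nvme1n1 ...
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev                  # @58: keyed by namespace number
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                     # @60-63: global registries
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done
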
00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:08.738 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:08.739 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.739 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.739 18:20:20 
00:13:08.739-00:13:09.074 18:20:20 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get nvme2 id-ctrl /dev/nvme2 (continued); per register: IFS=:; read -r reg val; [[ -n $val ]] && eval "nvme2[$reg]=$val":
    nvme2[nanagrpid]=0 nvme2[pels]=0 nvme2[domainid]=0 nvme2[megcap]=0
    nvme2[sqes]=0x66 nvme2[cqes]=0x44 nvme2[maxcmd]=0 nvme2[nn]=256 nvme2[oncs]=0x15d
    nvme2[fuses]=0 nvme2[fna]=0 nvme2[vwc]=0x7 nvme2[awun]=0 nvme2[awupf]=0
    nvme2[icsvscc]=0 nvme2[nwpc]=0 nvme2[acwu]=0 nvme2[ocfs]=0x3 nvme2[sgls]=0x1
    nvme2[mnan]=0 nvme2[maxdna]=0 nvme2[maxcna]=0
    nvme2[subnqn]=nqn.2019-08.org.qemu:12342
    nvme2[ioccsz]=0 nvme2[iorcsz]=0 nvme2[icdoff]=0 nvme2[fcatt]=0 nvme2[msdbd]=0 nvme2[ofcs]=0
    nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
    nvme2[active_power_workload]=-
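The trace above is one pass of nvme/functions.sh's nvme_get: it runs nvme-cli against the device and folds every non-empty "field : value" line of the output into a global associative array via eval. A minimal runnable sketch of that loop, assuming nvme-cli's default human-readable output; the helper name parse_nvme_id and the whitespace trimming are illustrative, not taken from the log:

    #!/usr/bin/env bash
    # Sketch of the nvme_get-style loop traced above: fold "field : value"
    # lines from nvme-cli into a named global associative array.
    parse_nvme_id() {                        # parse_nvme_id <array> <subcmd> <dev>
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare -gA nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # trimming is assumed here; the log
            val=${val# }                     # only shows the IFS=: split and eval
            [[ -n $val ]] && eval "${ref}[\$reg]=\$val"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    parse_nvme_id nvme2 id-ctrl /dev/nvme2
    echo "${nvme2[sqes]}"                    # prints 0x66 for the controller above

The eval is why each register shows up twice per @23 line in the raw trace: once as the eval text and once as the resulting assignment.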
00:13:09.074 18:20:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:13:09.074 18:20:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:13:09.074 18:20:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:13:09.074 18:20:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:13:09.074 18:20:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:09.074 18:20:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:13:09.074 18:20:20 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:09.074 18:20:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:13:09.074 18:20:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:13:09.074-00:13:09.076 18:20:20 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2n1 id-ns, same per-register loop:
    nvme2n1[nsze]=0x100000 nvme2n1[ncap]=0x100000 nvme2n1[nuse]=0x100000
    nvme2n1[nsfeat]=0x14 nvme2n1[nlbaf]=7 nvme2n1[flbas]=0x4 nvme2n1[mc]=0x3
    nvme2n1[dpc]=0x1f nvme2n1[dps]=0 nvme2n1[nmic]=0 nvme2n1[rescap]=0 nvme2n1[fpi]=0
    nvme2n1[dlfeat]=1 nvme2n1[nawun]=0 nvme2n1[nawupf]=0 nvme2n1[nacwu]=0
    nvme2n1[nabsn]=0 nvme2n1[nabo]=0 nvme2n1[nabspf]=0 nvme2n1[noiob]=0 nvme2n1[nvmcap]=0
    nvme2n1[npwg]=0 nvme2n1[npwa]=0 nvme2n1[npdg]=0 nvme2n1[npda]=0 nvme2n1[nows]=0
    nvme2n1[mssrl]=128 nvme2n1[mcl]=128 nvme2n1[msrc]=127 nvme2n1[nulbaf]=0
    nvme2n1[anagrpid]=0 nvme2n1[nsattr]=0 nvme2n1[nvmsetid]=0 nvme2n1[endgid]=0
    nvme2n1[nguid]=00000000000000000000000000000000 nvme2n1[eui64]=0000000000000000
    nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '  nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
    nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
    nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
    nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:09.076 18:20:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
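nvme2n1's flbas=0x4 selects LBA format 4, and lbaf4 carries lbads:12, so the namespace is formatted with 4096-byte logical blocks and no per-block metadata (ms:0). A self-contained check of that decoding, with variable names chosen here for illustration:

    #!/usr/bin/env bash
    # Decode the in-use LBA format from the fields logged above.
    flbas=0x4                                  # low nibble = active format index
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    idx=$(( flbas & 0xf ))                     # -> 4
    [[ $lbaf4 =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo "format $idx: $(( 1 << lbads ))-byte blocks"   # -> format 4: 4096-byte blocks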
00:13:09.076 18:20:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:13:09.076 18:20:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:13:09.076 18:20:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:13:09.076 18:20:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:13:09.076 18:20:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:13:09.076 18:20:20 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:09.076 18:20:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:13:09.076 18:20:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:13:09.076-00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2n2 id-ns, same per-register loop:
    nvme2n2[nsze]=0x100000 nvme2n2[ncap]=0x100000 nvme2n2[nuse]=0x100000
    nvme2n2[nsfeat]=0x14 nvme2n2[nlbaf]=7 nvme2n2[flbas]=0x4 nvme2n2[mc]=0x3
    nvme2n2[dpc]=0x1f nvme2n2[dps]=0 nvme2n2[nmic]=0 nvme2n2[rescap]=0 nvme2n2[fpi]=0
    nvme2n2[dlfeat]=1 nvme2n2[nawun]=0 nvme2n2[nawupf]=0 nvme2n2[nacwu]=0
    nvme2n2[nabsn]=0 nvme2n2[nabo]=0 nvme2n2[nabspf]=0 nvme2n2[noiob]=0 nvme2n2[nvmcap]=0
    nvme2n2[npwg]=0 nvme2n2[npwa]=0 nvme2n2[npdg]=0 nvme2n2[npda]=0 nvme2n2[nows]=0
    nvme2n2[mssrl]=128 nvme2n2[mcl]=128 nvme2n2[msrc]=127 nvme2n2[nulbaf]=0
    nvme2n2[anagrpid]=0 nvme2n2[nsattr]=0 nvme2n2[nvmsetid]=0 nvme2n2[endgid]=0
    nvme2n2[nguid]=00000000000000000000000000000000 nvme2n2[eui64]=0000000000000000
    nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '  nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
    nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
    nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
    nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
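Each iteration of the @54-58 loop above globs the controller's sysfs directory for namespace nodes, parses one, and files it in _ctrl_ns (a nameref to nvme2_ns) keyed by the namespace number. A standalone sketch of just that walk, using the same paths as the log, with the array handling simplified:

    #!/usr/bin/env bash
    # Sketch of the @54-58 namespace walk: nvme2n1..nvme2n3 -> _ctrl_ns[1..3].
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns=()
    for ns in "$ctrl/${ctrl##*/}n"*; do      # /sys/class/nvme/nvme2/nvme2n1 ...
        [[ -e $ns ]] || continue             # skip if the glob matched nothing
        ns_dev=${ns##*/}                     # e.g. nvme2n2
        _ctrl_ns[${ns_dev##*n}]=$ns_dev      # e.g. _ctrl_ns[2]=nvme2n2
    done
    declare -p _ctrl_ns                      # show index -> device mapping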
-- # local -gA 'nvme2n3=()' 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:09.078 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:09.079 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
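(Annotation: each lbafN descriptor captured above packs three fields: ms, the metadata bytes per block; lbads, the log2 of the data block size; and rp, the relative performance hint. The "(in use)" suffix marks the active format, so lbaf4 with lbads:12 means this namespace runs 4096-byte blocks with no metadata. A small sketch decoding a descriptor of that shape from the array populated above:)

    # sketch: decode the in-use LBA format string captured above
    desc=${nvme2n3[lbaf4]}          # "ms:0 lbads:12 rp:0 (in use)"
    [[ $desc =~ ms:([0-9]+) ]] && ms=${BASH_REMATCH[1]}
    [[ $desc =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo "metadata ${ms} B per block, data block $(( 1 << lbads )) B"   # -> 0 B, 4096 B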
00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:09.080 18:20:20 nvme_scc -- scripts/common.sh@15 -- # local i 00:13:09.080 18:20:20 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:13:09.080 18:20:20 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:09.080 18:20:20 nvme_scc -- scripts/common.sh@24 -- # return 0 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:09.080 18:20:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.080 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:09.081 18:20:20 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:09.081 18:20:20 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.081 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:09.082 18:20:20 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.082 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:09.083 
18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
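(Annotation: that completes the fourth and final id-ctrl capture. The mechanism visible in the trace is compact: functions.sh pipes `nvme id-ctrl /dev/nvmeX` through a read loop with IFS=':', and evals each reg/val pair into a global associative array, which is why every controller register appears as e.g. nvme3[oncs]=0x15d. The feature probe that follows then tests ONCS bit 8, the Copy command bit, on each parsed controller. A condensed sketch of both steps; the nvme-cli path is the one the trace itself invokes:)

    # condensed sketch of the scan + SCC probe traced above
    declare -gA nvme3=()
    while IFS=: read -r reg val; do
      [[ -n ${val// } ]] || continue             # skip separator/blank lines
      reg=${reg// }                              # drop the padding around the register name
      nvme3[$reg]=${val# }                       # keep the value, minus one leading space
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)

    oncs=${nvme3[oncs]}                          # 0x15d on every controller in this run
    if (( oncs & 1 << 8 )); then                 # ONCS bit 8 = Simple Copy support
      echo "nvme3 supports Simple Copy"
    fi

(The selection loop that follows runs this same bit test over all four parsed controllers; nvme_scc.sh then takes the first match, nvme1 at 0000:00:10.0.)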
00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:09.083 18:20:20 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:13:09.083 18:20:20 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:13:09.084 18:20:20 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:13:09.084 18:20:20 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:13:09.084 18:20:20 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:09.084 18:20:20 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:09.084 18:20:20 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:09.652 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:10.219 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.219 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.219 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.219 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.219 18:20:22 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:10.219 18:20:22 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:10.219 18:20:22 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:10.219 18:20:22 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:10.219 ************************************ 00:13:10.219 START TEST nvme_simple_copy 00:13:10.219 ************************************ 00:13:10.219 18:20:22 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:10.477 Initializing NVMe Controllers 00:13:10.477 Attaching to 0000:00:10.0 00:13:10.477 Controller supports SCC. Attached to 0000:00:10.0 00:13:10.477 Namespace ID: 1 size: 6GB 00:13:10.477 Initialization complete. 00:13:10.477 00:13:10.477 Controller QEMU NVMe Ctrl (12340 ) 00:13:10.477 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:10.477 Namespace Block Size:4096 00:13:10.477 Writing LBAs 0 to 63 with Random Data 00:13:10.477 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:10.477 LBAs matching Written Data: 64 00:13:10.477 00:13:10.477 real 0m0.313s 00:13:10.477 user 0m0.125s 00:13:10.477 sys 0m0.086s 00:13:10.477 18:20:22 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:10.477 18:20:22 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:10.477 ************************************ 00:13:10.477 END TEST nvme_simple_copy 00:13:10.477 ************************************ 00:13:10.736 18:20:22 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:13:10.736 ************************************ 00:13:10.736 END TEST nvme_scc 00:13:10.736 ************************************ 00:13:10.736 00:13:10.736 real 0m8.093s 00:13:10.736 user 0m1.264s 00:13:10.736 sys 0m1.696s 00:13:10.736 18:20:22 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:10.736 18:20:22 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:10.736 18:20:22 -- common/autotest_common.sh@1142 -- # return 0 00:13:10.736 18:20:22 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:13:10.736 18:20:22 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:13:10.736 18:20:22 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:13:10.736 18:20:22 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:13:10.736 18:20:22 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:10.736 18:20:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:10.736 18:20:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:10.736 18:20:22 -- common/autotest_common.sh@10 -- # set +x 00:13:10.736 ************************************ 00:13:10.736 START TEST nvme_fdp 00:13:10.736 ************************************ 00:13:10.736 18:20:22 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh 00:13:10.736 * Looking for test storage... 
00:13:10.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:10.736 18:20:22 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:10.736 18:20:22 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:10.736 18:20:22 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:10.736 18:20:22 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:10.736 18:20:22 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:10.736 18:20:22 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.736 18:20:22 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.736 18:20:22 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.736 18:20:22 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.736 18:20:22 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.736 18:20:22 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.736 18:20:22 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:10.736 18:20:22 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.736 18:20:22 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:10.736 18:20:22 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:10.736 18:20:22 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:10.736 18:20:22 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:10.736 18:20:22 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:10.736 18:20:22 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:10.736 18:20:22 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:10.736 18:20:22 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:10.736 18:20:22 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:10.736 18:20:22 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:10.736 18:20:22 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:10.995 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:11.253 Waiting for block devices as requested 00:13:11.253 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:11.253 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:11.527 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:11.527 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:16.928 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:16.928 18:20:28 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:16.928 18:20:28 nvme_fdp -- scripts/common.sh@15 -- # local i 00:13:16.928 18:20:28 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:16.928 18:20:28 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:16.928 18:20:28 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.928 
18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.928 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:16.929 18:20:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.929 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:16.930 18:20:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:16.930 18:20:28 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.930 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:16.931 
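[Editor's note] Once the controller registers are captured, functions.sh@54-57 above enumerates the controller's namespaces through a sysfs glob and reruns the same nvme_get helper with id-ns (nsze/ncap/nuse and friends). A hedged sketch of that walk, with the glob taken verbatim from the trace and an illustrative loop body standing in for the original nvme_get call:

    # Namespace walk as seen at functions.sh@54-57 (glob from the trace):
    for ctrl in /sys/class/nvme/nvme*; do
        for ns in "$ctrl/${ctrl##*/}n"*; do        # e.g. /sys/class/nvme/nvme0/nvme0n1
            [[ -e $ns ]] || continue
            nvme id-ns "/dev/${ns##*/}" | grep -E '^(nsze|ncap|nuse)'
        done
    done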
18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.931 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:16.932 
18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:16.932 18:20:28 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:16.932 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
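[Editor's note] The lbaf0..lbaf7 strings recorded above describe the namespace's supported LBA formats: lbads is log2 of the data block size, and the low four bits of flbas (0x4 here) select the active entry, so lbaf4 with lbads:12 means 4096-byte blocks, matching its "(in use)" marker. A small decoding sketch, with the values captured above hard-coded as sample input:

    # Decode the captured format descriptors (sample values from the trace):
    flbas=0x4                                     # bits 3:0 pick the active LBA format
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    fmt_idx=$(( flbas & 0xf ))                    # -> 4
    [[ $lbaf4 =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo "format $fmt_idx: $((1 << lbads))-byte blocks"   # -> 4096-byte blocks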
00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:16.933 18:20:28 nvme_fdp -- scripts/common.sh@15 -- # local i 00:13:16.933 18:20:28 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:16.933 18:20:28 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:16.933 18:20:28 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:16.933 18:20:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.933 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 
18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
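[Editor's note] The nvme0 pass above recorded oacs=0x12a (the nvme1 pass repeats the same value below). Reading that word against the NVMe base spec's OACS bit assignments suggests Format NVM (bit 1), Namespace Management (bit 3), Directives (bit 5) and Doorbell Buffer Config (bit 8) are advertised; the directives bit is the one this nvme_fdp run cares about, since FDP writes ride on the directives mechanism. A bit-test sketch, positions per spec:

    oacs=0x12a                                    # value captured in this log
    (( oacs & (1 << 3) )) && echo "namespace management supported"
    (( oacs & (1 << 5) )) && echo "directives supported (used by FDP writes)"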
00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:16.934 18:20:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:16.934 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
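The wctemp/cctemp values just captured are kelvins, per the NVMe identify convention, so this QEMU controller advertises the common 70 C warning and 100 C critical thresholds:

    $ echo "warning: $((343 - 273)) C, critical: $((373 - 273)) C"
    warning: 70 C, critical: 100 C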
00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:16.935 18:20:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:16.935 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
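Two of the fields above are packed encodings worth expanding. sqes=0x66 and cqes=0x44 hold power-of-two queue-entry sizes (low nibble = required, high nibble = maximum), i.e. 64-byte submission and 16-byte completion entries, and oncs=0x15d is the optional-NVM-command bitmask. A quick shell decode (bit names per the NVMe base spec's ONCS assignments, listed here from memory):

    $ sqes=0x66 cqes=0x44
    $ echo "SQE $((2 ** (sqes & 0xf)))/$((2 ** (sqes >> 4))) B, CQE $((2 ** (cqes & 0xf)))/$((2 ** (cqes >> 4))) B (req/max)"
    SQE 64/64 B, CQE 16/16 B (req/max)
    $ names=(compare write_unc dsm write_zeroes save_feat resv timestamp verify copy)
    $ for i in "${!names[@]}"; do (( 0x15d & (1 << i) )) && echo "${names[i]}"; done
    compare
    dsm
    write_zeroes
    save_feat
    timestamp
    copy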
00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:16.936 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
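These id-ns reads sit inside a two-level enumeration: an outer walk over /sys/class/nvme/nvme* that identifies each controller, and the inner per-namespace loop that started just above (@54). A paraphrased sketch of that shape, pieced together from the functions.sh@47-63 frames in this trace (the function name and the PCI-address lookup are assumptions, and the real script maintains a few more bookkeeping arrays):

    scan_nvme_ctrls() {                                      # name assumed
        local ctrl ns pci ctrl_dev ns_dev
        for ctrl in /sys/class/nvme/nvme*; do                # @47
            [[ -e $ctrl ]] || continue                       # @48
            pci=$(basename "$(readlink -f "$ctrl/device")")  # @49: lookup method assumed
            pci_can_use "$pci" || continue                   # @50: scripts/common.sh blocklist filter
            ctrl_dev=${ctrl##*/}                             # @51: nvme1, nvme2, ...
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # @52
            local -n _ctrl_ns=${ctrl_dev}_ns                 # @53
            for ns in "$ctrl/${ctrl##*/}n"*; do              # @54
                [[ -e $ns ]] || continue                     # @55
                ns_dev=${ns##*/}                             # @56: nvme1n1, ...
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # @57
                _ctrl_ns[${ns_dev##*n}]=$ns_dev              # @58: keyed by namespace id
            done
            ctrls["$ctrl_dev"]=$ctrl_dev                     # @60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # @61
            bdfs["$ctrl_dev"]=$pci                           # @62: e.g. 0000:00:10.0
        done
    }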
00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
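For scale: nvme1n1 reported nsze=0x17a17a blocks, and flbas=0x7 selects lbaf7, which is listed with lbads:12 (4096-byte blocks) a few entries further down in this dump, so the namespace works out to roughly 5.9 GiB:

    $ echo "$(( 0x17a17a )) blocks x $(( 1 << 12 )) B = $(( 0x17a17a * (1 << 12) )) bytes"
    1548666 blocks x 4096 B = 6343335936 bytes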
00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.937 
18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:16.937 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:16.938 18:20:28 nvme_fdp -- scripts/common.sh@15 -- # local i 00:13:16.938 18:20:28 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:13:16.938 18:20:28 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:16.938 18:20:28 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.938 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.939 18:20:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:16.939 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.939 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.939 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.939 18:20:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:16.939 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:16.939 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:16.939 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:16.940 18:20:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:16.940 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:16.941 18:20:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:16.941 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:16.942 18:20:28 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 
18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:13:16.942 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:16.943 18:20:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.943 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:16.944 18:20:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:16.944 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.945 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:16.946 18:20:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.946 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:16.947 18:20:28 nvme_fdp -- scripts/common.sh@15 -- # local i 00:13:16.947 18:20:28 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:13:16.947 18:20:28 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:16.947 18:20:28 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:16.947 18:20:28 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:16.947 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
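The trace running through this stretch is nvme/functions.sh's nvme_get helper walking the text output of 'nvme id-ctrl /dev/nvme3' line by line: IFS is set to ':', each line is split into a register name and a value, and non-empty values are eval'd into a controller-named associative array. A minimal sketch of that parsing pattern, with the nameref/eval indirection of the real helper simplified away:

    #!/usr/bin/env bash
    # Sketch of the nvme_get parsing loop traced above (assumes nvme-cli's
    # id-ctrl text output of one "reg : value" pair per line, as in this log).
    declare -A nvme3=()

    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # strip whitespace around the name
        [[ -n $val ]] || continue       # skip banner/blank lines
        nvme3[$reg]=${val# }            # e.g. nvme3[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)

    echo "vid=${nvme3[vid]} ctratt=${nvme3[ctratt]}"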
00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:16.948 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.949 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 
18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:16.950 18:20:28 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:16.951 18:20:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:13:16.951 18:20:28 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:13:16.951 18:20:28 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:13:16.951 18:20:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:16.951 18:20:28 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:13:16.951 18:20:28 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:17.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:18.082 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:18.082 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:18.082 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:18.082 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:18.082 18:20:29 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:18.082 18:20:29 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:18.082 18:20:29 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.082 18:20:29 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:18.082 ************************************ 00:13:18.082 START TEST nvme_flexible_data_placement 00:13:18.082 ************************************ 00:13:18.082 18:20:29 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:18.341 Initializing NVMe Controllers 00:13:18.342 Attaching to 0000:00:13.0 00:13:18.342 Controller supports FDP Attached to 0000:00:13.0 00:13:18.342 Namespace ID: 1 Endurance Group ID: 1 
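With all four controllers parsed, get_ctrls_with_feature settles on nvme3 because its CTRATT value (0x88010) has bit 19 set, the NVMe 2.0 flag for Flexible Data Placement, while the 0x8000 reported by the other controllers does not. A sketch of that selection test, seeded with the two CTRATT values from this run:

    #!/usr/bin/env bash
    # Sketch of the ctrl_has_fdp check traced above: FDP support is bit 19
    # of CTRATT (0x88010 has it set, 0x8000 does not).
    declare -A nvme2=( [ctratt]=0x8000 )    # values taken from this log
    declare -A nvme3=( [ctratt]=0x88010 )

    ctrl_has_fdp() {
        local -n _ctrl=$1                   # nameref into the register array
        local ctratt=${_ctrl[ctratt]}
        (( ctratt & 1 << 19 ))              # bit 19 = FDP supported
    }

    for ctrl in nvme2 nvme3; do
        ctrl_has_fdp "$ctrl" && echo "$ctrl supports FDP"   # prints nvme3 only
    done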
00:13:18.342 Initialization complete. 00:13:18.342 00:13:18.342 ================================== 00:13:18.342 == FDP tests for Namespace: #01 == 00:13:18.342 ================================== 00:13:18.342 00:13:18.342 Get Feature: FDP: 00:13:18.342 ================= 00:13:18.342 Enabled: Yes 00:13:18.342 FDP configuration Index: 0 00:13:18.342 00:13:18.342 FDP configurations log page 00:13:18.342 =========================== 00:13:18.342 Number of FDP configurations: 1 00:13:18.342 Version: 0 00:13:18.342 Size: 112 00:13:18.342 FDP Configuration Descriptor: 0 00:13:18.342 Descriptor Size: 96 00:13:18.342 Reclaim Group Identifier format: 2 00:13:18.342 FDP Volatile Write Cache: Not Present 00:13:18.342 FDP Configuration: Valid 00:13:18.342 Vendor Specific Size: 0 00:13:18.342 Number of Reclaim Groups: 2 00:13:18.342 Number of Reclaim Unit Handles: 8 00:13:18.342 Max Placement Identifiers: 128 00:13:18.342 Number of Namespaces Supported: 256 00:13:18.342 Reclaim Unit Nominal Size: 6000000 bytes 00:13:18.342 Estimated Reclaim Unit Time Limit: Not Reported 00:13:18.342 RUH Desc #000: RUH Type: Initially Isolated 00:13:18.342 RUH Desc #001: RUH Type: Initially Isolated 00:13:18.342 RUH Desc #002: RUH Type: Initially Isolated 00:13:18.342 RUH Desc #003: RUH Type: Initially Isolated 00:13:18.342 RUH Desc #004: RUH Type: Initially Isolated 00:13:18.342 RUH Desc #005: RUH Type: Initially Isolated 00:13:18.342 RUH Desc #006: RUH Type: Initially Isolated 00:13:18.342 RUH Desc #007: RUH Type: Initially Isolated 00:13:18.342 00:13:18.342 FDP reclaim unit handle usage log page 00:13:18.342 ====================================== 00:13:18.342 Number of Reclaim Unit Handles: 8 00:13:18.342 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:18.342 RUH Usage Desc #001: RUH Attributes: Unused 00:13:18.342 RUH Usage Desc #002: RUH Attributes: Unused 00:13:18.342 RUH Usage Desc #003: RUH Attributes: Unused 00:13:18.342 RUH Usage Desc #004: RUH Attributes: Unused 00:13:18.342 RUH Usage Desc #005: RUH Attributes: Unused 00:13:18.342 RUH Usage Desc #006: RUH Attributes: Unused 00:13:18.342 RUH Usage Desc #007: RUH Attributes: Unused 00:13:18.342 00:13:18.342 FDP statistics log page 00:13:18.342 ======================= 00:13:18.342 Host bytes with metadata written: 824676352 00:13:18.342 Media bytes with metadata written: 824758272 00:13:18.342 Media bytes erased: 0 00:13:18.342 00:13:18.342 FDP Reclaim unit handle status 00:13:18.342 ============================== 00:13:18.342 Number of RUHS descriptors: 2 00:13:18.342 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004d87 00:13:18.342 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:18.342 00:13:18.342 FDP write on placement id: 0 success 00:13:18.342 00:13:18.342 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:18.342 00:13:18.342 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:18.342 00:13:18.342 Get Feature: FDP Events for Placement handle: #0 00:13:18.342 ======================== 00:13:18.342 Number of FDP Events: 6 00:13:18.342 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:18.342 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:18.342 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:13:18.342 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:18.342 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:18.342 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
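The configuration, usage, and statistics dumps above are formatted by the fdp test binary, but the underlying data are ordinary NVMe log pages (NVMe 2.0 assigns FDP configurations LID 0x20, reclaim unit handle usage 0x21, statistics 0x22, and events 0x23). As a hedged sketch, they can also be pulled raw with nvme-cli's generic get-log, passing the Endurance Group ID reported above as the log specific identifier:

    # Hedged sketch: dump the raw FDP log pages behind the formatted output
    # above. LIDs per NVMe 2.0; --lsi carries the Endurance Group ID (1 here).
    dev=/dev/nvme3                          # FDP-capable controller in this run
    for lid in 0x20 0x21 0x22 0x23; do
        nvme get-log "$dev" --log-id="$lid" --log-len=512 --lsi=1
    done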
00:13:18.342 00:13:18.342 FDP events log page 00:13:18.342 =================== 00:13:18.342 Number of FDP events: 1 00:13:18.342 FDP Event #0: 00:13:18.342 Event Type: RU Not Written to Capacity 00:13:18.342 Placement Identifier: Valid 00:13:18.342 NSID: Valid 00:13:18.342 Location: Valid 00:13:18.342 Placement Identifier: 0 00:13:18.342 Event Timestamp: 7 00:13:18.342 Namespace Identifier: 1 00:13:18.342 Reclaim Group Identifier: 0 00:13:18.342 Reclaim Unit Handle Identifier: 0 00:13:18.342 00:13:18.342 FDP test passed 00:13:18.342 00:13:18.342 real 0m0.272s 00:13:18.342 user 0m0.092s 00:13:18.342 sys 0m0.082s 00:13:18.342 18:20:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.342 18:20:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:13:18.342 ************************************ 00:13:18.342 END TEST nvme_flexible_data_placement 00:13:18.342 ************************************ 00:13:18.342 18:20:30 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0 00:13:18.342 00:13:18.342 real 0m7.758s 00:13:18.342 user 0m1.167s 00:13:18.342 sys 0m1.597s 00:13:18.342 18:20:30 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.342 18:20:30 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:18.342 ************************************ 00:13:18.342 END TEST nvme_fdp 00:13:18.342 ************************************ 00:13:18.342 18:20:30 -- common/autotest_common.sh@1142 -- # return 0 00:13:18.342 18:20:30 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:13:18.342 18:20:30 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:18.342 18:20:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:18.342 18:20:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.342 18:20:30 -- common/autotest_common.sh@10 -- # set +x 00:13:18.601 ************************************ 00:13:18.601 START TEST nvme_rpc 00:13:18.601 ************************************ 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:18.601 * Looking for test storage... 
00:13:18.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:18.601 18:20:30 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:18.601 18:20:30 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:13:18.601 18:20:30 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:18.601 18:20:30 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=72496 00:13:18.601 18:20:30 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:18.601 18:20:30 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:18.601 18:20:30 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 72496 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 72496 ']' 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.601 18:20:30 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.859 [2024-07-22 18:20:30.631908] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
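The get_first_nvme_bdf helper traced above reduces to one pipeline; a sketch using the same gen_nvme.sh and jq filter shown in the trace, with head -n1 standing in for taking the first array element:

  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
    | jq -r '.config[].params.traddr' | head -n1    # prints 0000:00:10.0 in this run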
00:13:18.859 [2024-07-22 18:20:30.632097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72496 ] 00:13:18.859 [2024-07-22 18:20:30.809912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:19.117 [2024-07-22 18:20:31.092010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.117 [2024-07-22 18:20:31.092024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.050 18:20:31 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.050 18:20:31 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:20.050 18:20:31 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:20.308 Nvme0n1 00:13:20.308 18:20:32 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:20.308 18:20:32 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:20.567 request: 00:13:20.567 { 00:13:20.567 "bdev_name": "Nvme0n1", 00:13:20.567 "filename": "non_existing_file", 00:13:20.567 "method": "bdev_nvme_apply_firmware", 00:13:20.567 "req_id": 1 00:13:20.567 } 00:13:20.567 Got JSON-RPC error response 00:13:20.567 response: 00:13:20.567 { 00:13:20.567 "code": -32603, 00:13:20.567 "message": "open file failed." 00:13:20.567 } 00:13:20.567 18:20:32 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:20.567 18:20:32 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:20.567 18:20:32 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:20.825 18:20:32 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:20.825 18:20:32 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 72496 00:13:20.825 18:20:32 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 72496 ']' 00:13:20.825 18:20:32 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 72496 00:13:20.825 18:20:32 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:13:20.825 18:20:32 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:20.825 18:20:32 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72496 00:13:20.825 18:20:32 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:20.825 killing process with pid 72496 00:13:20.825 18:20:32 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:20.825 18:20:32 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72496' 00:13:20.825 18:20:32 nvme_rpc -- common/autotest_common.sh@967 -- # kill 72496 00:13:20.825 18:20:32 nvme_rpc -- common/autotest_common.sh@972 -- # wait 72496 00:13:23.354 00:13:23.354 real 0m4.507s 00:13:23.354 user 0m8.399s 00:13:23.354 sys 0m0.708s 00:13:23.354 18:20:34 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:23.354 18:20:34 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.354 ************************************ 00:13:23.354 END TEST nvme_rpc 00:13:23.354 ************************************ 00:13:23.354 18:20:34 -- common/autotest_common.sh@1142 -- # return 0 00:13:23.354 18:20:34 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 
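The nvme_rpc sequence above is a deliberate negative test: bdev_nvme_apply_firmware is pointed at a file that does not exist and must fail with JSON-RPC error -32603 ("open file failed.") before the controller is detached. A hedged sketch of the same round trip against a running target, with the rpc.py path used throughout this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 \
    || echo "expected failure, rv=$?"    # the test asserts a non-zero rv here
  $rpc bdev_nvme_detach_controller Nvme0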
00:13:23.354 18:20:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:23.354 18:20:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.354 18:20:34 -- common/autotest_common.sh@10 -- # set +x 00:13:23.354 ************************************ 00:13:23.354 START TEST nvme_rpc_timeouts 00:13:23.354 ************************************ 00:13:23.354 18:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:23.354 * Looking for test storage... 00:13:23.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:23.354 18:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:23.354 18:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_72572 00:13:23.354 18:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_72572 00:13:23.354 18:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=72596 00:13:23.354 18:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:23.354 18:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:23.354 18:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 72596 00:13:23.354 18:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 72596 ']' 00:13:23.354 18:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.354 18:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.354 18:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.354 18:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.354 18:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:23.354 [2024-07-22 18:20:35.092178] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:13:23.354 [2024-07-22 18:20:35.092335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72596 ] 00:13:23.354 [2024-07-22 18:20:35.254819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:23.612 [2024-07-22 18:20:35.538210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.612 [2024-07-22 18:20:35.538216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.545 18:20:36 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.545 18:20:36 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:13:24.545 Checking default timeout settings: 00:13:24.545 18:20:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:24.545 18:20:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:24.803 Making settings changes with rpc: 00:13:24.803 18:20:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:24.803 18:20:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:25.061 Check default vs. modified settings: 00:13:25.061 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:13:25.061 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_72572 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_72572 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:25.628 Setting action_on_timeout is changed as expected. 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
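The check above works by snapshotting the full target configuration before and after the change; a hedged sketch of that flow, with the tmp file names from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default_72572     # defaults: none / 0 / 0
  $rpc bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified_72572    # modified: abort / 12000000 / 24000000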
00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_72572 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_72572 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:25.628 Setting timeout_us is changed as expected. 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_72572 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_72572 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:25.628 Setting timeout_admin_us is changed as expected. 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
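Each per-setting comparison above is the same three-stage pipeline; a compact equivalent (the sed strips JSON quotes and commas so the values compare as bare tokens):

  for s in action_on_timeout timeout_us timeout_admin_us; do
    printf '%s -> ' "$s"
    grep "$s" /tmp/settings_modified_72572 \
      | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'   # abort / 12000000 / 24000000
  done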
00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_72572 /tmp/settings_modified_72572 00:13:25.628 18:20:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 72596 00:13:25.628 18:20:37 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 72596 ']' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 72596 00:13:25.628 18:20:37 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:13:25.628 18:20:37 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72596 00:13:25.628 18:20:37 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:25.628 18:20:37 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:25.628 18:20:37 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72596' 00:13:25.628 killing process with pid 72596 00:13:25.628 18:20:37 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 72596 00:13:25.628 18:20:37 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 72596 00:13:28.162 RPC TIMEOUT SETTING TEST PASSED. 00:13:28.162 18:20:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:13:28.162 ************************************ 00:13:28.162 END TEST nvme_rpc_timeouts 00:13:28.162 ************************************ 00:13:28.162 00:13:28.162 real 0m4.780s 00:13:28.162 user 0m9.035s 00:13:28.162 sys 0m0.708s 00:13:28.162 18:20:39 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.162 18:20:39 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:28.162 18:20:39 -- common/autotest_common.sh@1142 -- # return 0 00:13:28.162 18:20:39 -- spdk/autotest.sh@243 -- # uname -s 00:13:28.162 18:20:39 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:13:28.162 18:20:39 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:28.162 18:20:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:28.162 18:20:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.162 18:20:39 -- common/autotest_common.sh@10 -- # set +x 00:13:28.162 ************************************ 00:13:28.162 START TEST sw_hotplug 00:13:28.162 ************************************ 00:13:28.162 18:20:39 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:28.162 * Looking for test storage... 
00:13:28.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:28.162 18:20:39 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:28.162 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:28.422 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:28.422 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:28.422 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:28.422 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:28.422 18:20:40 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:13:28.422 18:20:40 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:13:28.422 18:20:40 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:13:28.422 18:20:40 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@230 -- # local class 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@15 -- # local i 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:13:28.422 18:20:40 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@15 -- # local i 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@15 -- # local i 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:13:28.422 18:20:40 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:28.422 18:20:40 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:13:28.422 18:20:40 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:28.422 18:20:40 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:28.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:28.991 Waiting for block devices as requested 00:13:28.991 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.249 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.249 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.249 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:34.515 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:34.515 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:34.515 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:34.773 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:34.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:34.773 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:35.341 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:35.341 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.341 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.599 18:20:47 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:35.599 18:20:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.599 18:20:47 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:35.599 18:20:47 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:35.599 18:20:47 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=73455 00:13:35.599 18:20:47 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:35.599 18:20:47 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:35.599 18:20:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:35.599 18:20:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:35.599 18:20:47 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:13:35.599 18:20:47 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:13:35.599 18:20:47 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:13:35.599 18:20:47 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:13:35.599 18:20:47 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:13:35.599 18:20:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:35.599 18:20:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:35.599 18:20:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:35.599 18:20:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:35.600 18:20:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:35.858 Initializing NVMe Controllers 00:13:35.858 Attaching to 0000:00:10.0 00:13:35.858 Attaching to 0000:00:11.0 00:13:35.858 Attached to 0000:00:10.0 00:13:35.858 Attached to 0000:00:11.0 00:13:35.858 Initialization complete. Starting I/O... 
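The nvme_in_userspace walk traced above (scripts/common.sh) boils down to a single lspci pipeline: PCI class 01 (mass storage), subclass 08 (NVM), progif 02 (NVMe) gives the class code 0108 that the awk filter matches. A standalone equivalent, copied from the trace rather than verified independently:

  lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # -> 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0, one per line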
00:13:35.858 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:35.858 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:35.858 00:13:36.793 QEMU NVMe Ctrl (12340 ): 1076 I/Os completed (+1076) 00:13:36.793 QEMU NVMe Ctrl (12341 ): 1168 I/Os completed (+1168) 00:13:36.793 00:13:38.163 QEMU NVMe Ctrl (12340 ): 2536 I/Os completed (+1460) 00:13:38.163 QEMU NVMe Ctrl (12341 ): 2670 I/Os completed (+1502) 00:13:38.163 00:13:39.099 QEMU NVMe Ctrl (12340 ): 4115 I/Os completed (+1579) 00:13:39.099 QEMU NVMe Ctrl (12341 ): 4330 I/Os completed (+1660) 00:13:39.099 00:13:40.032 QEMU NVMe Ctrl (12340 ): 5468 I/Os completed (+1353) 00:13:40.032 QEMU NVMe Ctrl (12341 ): 5852 I/Os completed (+1522) 00:13:40.032 00:13:40.966 QEMU NVMe Ctrl (12340 ): 7172 I/Os completed (+1704) 00:13:40.966 QEMU NVMe Ctrl (12341 ): 7666 I/Os completed (+1814) 00:13:40.966 00:13:41.531 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:41.531 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:41.531 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:41.531 [2024-07-22 18:20:53.526983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:41.531 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:41.531 [2024-07-22 18:20:53.529058] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.531 [2024-07-22 18:20:53.529149] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.531 [2024-07-22 18:20:53.529181] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.531 [2024-07-22 18:20:53.529209] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.531 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:41.531 [2024-07-22 18:20:53.534803] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.531 [2024-07-22 18:20:53.534880] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.531 [2024-07-22 18:20:53.534911] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.531 [2024-07-22 18:20:53.534936] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.789 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:41.789 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:41.789 [2024-07-22 18:20:53.566522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:41.789 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:41.789 [2024-07-22 18:20:53.568327] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.789 [2024-07-22 18:20:53.568399] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.789 [2024-07-22 18:20:53.568436] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.789 [2024-07-22 18:20:53.568462] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.789 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:41.789 [2024-07-22 18:20:53.571153] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.790 [2024-07-22 18:20:53.571210] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.790 [2024-07-22 18:20:53.571242] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.790 [2024-07-22 18:20:53.571266] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.790 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:41.790 EAL: Scan for (pci) bus failed. 00:13:41.790 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:41.790 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:41.790 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:41.790 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:41.790 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:41.790 00:13:41.790 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:42.047 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:42.047 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:42.047 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:42.047 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:42.047 Attaching to 0000:00:10.0 00:13:42.047 Attached to 0000:00:10.0 00:13:42.047 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:42.047 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:42.047 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:42.047 Attaching to 0000:00:11.0 00:13:42.047 Attached to 0000:00:11.0 00:13:42.981 QEMU NVMe Ctrl (12340 ): 1502 I/Os completed (+1502) 00:13:42.981 QEMU NVMe Ctrl (12341 ): 1383 I/Os completed (+1383) 00:13:42.981 00:13:43.917 QEMU NVMe Ctrl (12340 ): 3003 I/Os completed (+1501) 00:13:43.917 QEMU NVMe Ctrl (12341 ): 3006 I/Os completed (+1623) 00:13:43.917 00:13:44.853 QEMU NVMe Ctrl (12340 ): 4557 I/Os completed (+1554) 00:13:44.853 QEMU NVMe Ctrl (12341 ): 4640 I/Os completed (+1634) 00:13:44.853 00:13:45.787 QEMU NVMe Ctrl (12340 ): 6114 I/Os completed (+1557) 00:13:45.787 QEMU NVMe Ctrl (12341 ): 6415 I/Os completed (+1775) 00:13:45.787 00:13:47.166 QEMU NVMe Ctrl (12340 ): 7525 I/Os completed (+1411) 00:13:47.166 QEMU NVMe Ctrl (12341 ): 8032 I/Os completed (+1617) 00:13:47.166 00:13:47.767 QEMU NVMe Ctrl (12340 ): 8954 I/Os completed (+1429) 00:13:47.767 QEMU NVMe Ctrl (12341 ): 9558 I/Os completed (+1526) 00:13:47.767 00:13:49.144 QEMU NVMe Ctrl (12340 ): 10813 I/Os completed (+1859) 00:13:49.144 QEMU NVMe Ctrl (12341 ): 11658 I/Os completed (+2100) 
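The remove/attach cycle above is driven purely through sysfs: the helper's xtrace (echo 1, echo uio_pci_generic, echo <bdf>, echo '') corresponds to the standard Linux PCI soft-hotplug sequence. A hedged sketch; the sysfs targets are inferred, since xtrace does not show redirections:

  bdf=0000:00:10.0                                 # example device
  echo 1 > /sys/bus/pci/devices/$bdf/remove        # soft-remove: ctrlr enters failed state
  echo 1 > /sys/bus/pci/rescan                     # rediscover the function on the bus
  echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
  echo $bdf > /sys/bus/pci/drivers_probe           # rebind to the userspace driver
  echo '' > /sys/bus/pci/devices/$bdf/driver_override   # clear the override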
00:13:49.144 00:13:50.079 QEMU NVMe Ctrl (12340 ): 12341 I/Os completed (+1528) 00:13:50.079 QEMU NVMe Ctrl (12341 ): 13445 I/Os completed (+1787) 00:13:50.079 00:13:51.033 QEMU NVMe Ctrl (12340 ): 13947 I/Os completed (+1606) 00:13:51.033 QEMU NVMe Ctrl (12341 ): 15268 I/Os completed (+1823) 00:13:51.033 00:13:51.967 QEMU NVMe Ctrl (12340 ): 15492 I/Os completed (+1545) 00:13:51.967 QEMU NVMe Ctrl (12341 ): 16981 I/Os completed (+1713) 00:13:51.967 00:13:52.902 QEMU NVMe Ctrl (12340 ): 17118 I/Os completed (+1626) 00:13:52.902 QEMU NVMe Ctrl (12341 ): 18773 I/Os completed (+1792) 00:13:52.902 00:13:53.837 QEMU NVMe Ctrl (12340 ): 18726 I/Os completed (+1608) 00:13:53.837 QEMU NVMe Ctrl (12341 ): 20581 I/Os completed (+1808) 00:13:53.837 00:13:54.096 18:21:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:54.096 18:21:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:54.096 18:21:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:54.096 18:21:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:54.096 [2024-07-22 18:21:05.923569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:54.096 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:54.096 [2024-07-22 18:21:05.925701] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 [2024-07-22 18:21:05.925894] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 [2024-07-22 18:21:05.926053] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 [2024-07-22 18:21:05.926132] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:54.096 [2024-07-22 18:21:05.929048] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 [2024-07-22 18:21:05.929113] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 [2024-07-22 18:21:05.929140] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 [2024-07-22 18:21:05.929165] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 18:21:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:54.096 18:21:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:54.096 [2024-07-22 18:21:05.949246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:54.096 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:54.096 [2024-07-22 18:21:05.951263] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 [2024-07-22 18:21:05.951326] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 [2024-07-22 18:21:05.951363] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 [2024-07-22 18:21:05.951419] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:54.096 [2024-07-22 18:21:05.954026] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 [2024-07-22 18:21:05.954082] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 [2024-07-22 18:21:05.954111] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 [2024-07-22 18:21:05.954136] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.096 18:21:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:54.096 18:21:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:54.096 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:54.096 EAL: Scan for (pci) bus failed. 00:13:54.096 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.096 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.096 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:54.354 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:54.354 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.354 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.354 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.354 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:54.354 Attaching to 0000:00:10.0 00:13:54.354 Attached to 0000:00:10.0 00:13:54.354 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:54.354 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.354 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:54.354 Attaching to 0000:00:11.0 00:13:54.354 Attached to 0000:00:11.0 00:13:54.921 QEMU NVMe Ctrl (12340 ): 1036 I/Os completed (+1036) 00:13:54.921 QEMU NVMe Ctrl (12341 ): 891 I/Os completed (+891) 00:13:54.921 00:13:55.856 QEMU NVMe Ctrl (12340 ): 2655 I/Os completed (+1619) 00:13:55.856 QEMU NVMe Ctrl (12341 ): 2569 I/Os completed (+1678) 00:13:55.856 00:13:56.799 QEMU NVMe Ctrl (12340 ): 4243 I/Os completed (+1588) 00:13:56.799 QEMU NVMe Ctrl (12341 ): 4310 I/Os completed (+1741) 00:13:56.799 00:13:57.734 QEMU NVMe Ctrl (12340 ): 5676 I/Os completed (+1433) 00:13:57.734 QEMU NVMe Ctrl (12341 ): 5906 I/Os completed (+1596) 00:13:57.734 00:13:59.109 QEMU NVMe Ctrl (12340 ): 7185 I/Os completed (+1509) 00:13:59.109 QEMU NVMe Ctrl (12341 ): 7637 I/Os completed (+1731) 00:13:59.109 00:14:00.044 QEMU NVMe Ctrl (12340 ): 8720 I/Os completed (+1535) 00:14:00.044 QEMU NVMe Ctrl (12341 ): 9572 I/Os completed (+1935) 00:14:00.044 00:14:00.979 QEMU NVMe Ctrl (12340 ): 10330 I/Os completed (+1610) 00:14:00.979 QEMU NVMe Ctrl (12341 ): 11479 I/Os completed (+1907) 00:14:00.979 00:14:01.925 
QEMU NVMe Ctrl (12340 ): 11886 I/Os completed (+1556) 00:14:01.925 QEMU NVMe Ctrl (12341 ): 13205 I/Os completed (+1726) 00:14:01.925 00:14:02.860 QEMU NVMe Ctrl (12340 ): 13462 I/Os completed (+1576) 00:14:02.860 QEMU NVMe Ctrl (12341 ): 14945 I/Os completed (+1740) 00:14:02.860 00:14:03.796 QEMU NVMe Ctrl (12340 ): 14907 I/Os completed (+1445) 00:14:03.796 QEMU NVMe Ctrl (12341 ): 16520 I/Os completed (+1575) 00:14:03.796 00:14:04.784 QEMU NVMe Ctrl (12340 ): 16359 I/Os completed (+1452) 00:14:04.784 QEMU NVMe Ctrl (12341 ): 18098 I/Os completed (+1578) 00:14:04.784 00:14:06.159 QEMU NVMe Ctrl (12340 ): 17818 I/Os completed (+1459) 00:14:06.159 QEMU NVMe Ctrl (12341 ): 19749 I/Os completed (+1651) 00:14:06.159 00:14:06.417 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:06.417 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:06.417 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:06.417 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:06.417 [2024-07-22 18:21:18.266188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:14:06.417 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:06.417 [2024-07-22 18:21:18.270259] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 [2024-07-22 18:21:18.270378] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 [2024-07-22 18:21:18.270427] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 [2024-07-22 18:21:18.270480] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:06.417 [2024-07-22 18:21:18.275250] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 [2024-07-22 18:21:18.275346] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 [2024-07-22 18:21:18.275399] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 [2024-07-22 18:21:18.275439] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:06.417 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:06.417 [2024-07-22 18:21:18.299399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:06.417 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:06.417 [2024-07-22 18:21:18.303114] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 [2024-07-22 18:21:18.303399] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 [2024-07-22 18:21:18.303632] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 [2024-07-22 18:21:18.303874] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:06.417 [2024-07-22 18:21:18.308647] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 [2024-07-22 18:21:18.308900] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 [2024-07-22 18:21:18.309114] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 [2024-07-22 18:21:18.309181] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.417 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:06.417 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:06.417 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:06.417 EAL: Scan for (pci) bus failed. 00:14:06.417 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:06.417 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:06.417 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:06.732 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:06.732 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:06.732 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:06.732 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:06.732 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:06.732 Attaching to 0000:00:10.0 00:14:06.732 Attached to 0000:00:10.0 00:14:06.732 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:06.732 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:06.732 18:21:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:06.732 Attaching to 0000:00:11.0 00:14:06.732 Attached to 0000:00:11.0 00:14:06.732 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:06.732 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:06.732 [2024-07-22 18:21:18.639050] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:18.931 18:21:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:18.931 18:21:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:18.931 18:21:30 sw_hotplug -- common/autotest_common.sh@715 -- # time=43.11 00:14:18.931 18:21:30 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.11 00:14:18.931 18:21:30 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:14:18.931 18:21:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.11 00:14:18.931 18:21:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.11 2 00:14:18.931 remove_attach_helper took 43.11s to complete (handling 2 nvme drive(s)) 18:21:30 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:14:25.510 18:21:36 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 73455 00:14:25.510 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (73455) - No such process 00:14:25.510 18:21:36 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 73455 00:14:25.510 18:21:36 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:14:25.510 18:21:36 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:14:25.510 18:21:36 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:14:25.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.510 18:21:36 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=74000 00:14:25.510 18:21:36 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:25.510 18:21:36 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:14:25.510 18:21:36 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 74000 00:14:25.510 18:21:36 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 74000 ']' 00:14:25.510 18:21:36 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.510 18:21:36 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:25.510 18:21:36 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.510 18:21:36 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:25.510 18:21:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:25.510 [2024-07-22 18:21:36.793339] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
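After the example-app pass, tgt_run_hotplug repeats the exercise against a long-lived target; a hedged sketch of the preamble traced above, with the binary path used throughout this log:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdk_tgt_pid=$!                                  # 74000 in this run
  # waitforlisten then blocks until /var/tmp/spdk.sock accepts RPCs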
00:14:25.510 [2024-07-22 18:21:36.793905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74000 ] 00:14:25.510 [2024-07-22 18:21:36.962082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.510 [2024-07-22 18:21:37.249127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.076 18:21:38 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:26.076 18:21:38 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:14:26.077 18:21:38 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:26.077 18:21:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.077 18:21:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:26.077 18:21:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.077 18:21:38 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:14:26.077 18:21:38 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:26.077 18:21:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:26.077 18:21:38 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:14:26.077 18:21:38 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:14:26.077 18:21:38 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:14:26.077 18:21:38 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:14:26.077 18:21:38 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:14:26.077 18:21:38 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:26.077 18:21:38 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:26.077 18:21:38 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:26.077 18:21:38 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:26.077 18:21:38 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:32.643 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:32.643 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:32.643 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:32.643 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:32.643 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:32.643 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:32.643 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:32.643 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:32.643 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:32.643 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:32.643 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:32.643 18:21:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.643 18:21:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:32.643 18:21:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.643 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:32.643 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:32.643 [2024-07-22 18:21:44.180435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:14:32.643 [2024-07-22 18:21:44.185031] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:32.643 [2024-07-22 18:21:44.185129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.643 [2024-07-22 18:21:44.185190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.643 [2024-07-22 18:21:44.185231] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:32.643 [2024-07-22 18:21:44.185263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.643 [2024-07-22 18:21:44.185288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.643 [2024-07-22 18:21:44.185319] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:32.643 [2024-07-22 18:21:44.185343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.643 [2024-07-22 18:21:44.185367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.643 [2024-07-22 18:21:44.185389] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:32.643 [2024-07-22 18:21:44.185415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.643 [2024-07-22 18:21:44.185435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.643 [2024-07-22 18:21:44.580407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:32.643 [2024-07-22 18:21:44.583625] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:32.643 [2024-07-22 18:21:44.583708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.643 [2024-07-22 18:21:44.583733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.643 [2024-07-22 18:21:44.583765] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:32.643 [2024-07-22 18:21:44.583782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.643 [2024-07-22 18:21:44.583799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.643 [2024-07-22 18:21:44.583815] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:32.643 [2024-07-22 18:21:44.583832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.643 [2024-07-22 18:21:44.583845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.643 [2024-07-22 18:21:44.583863] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:32.643 [2024-07-22 18:21:44.583877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.643 [2024-07-22 18:21:44.583893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.902 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:32.902 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:32.902 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:32.902 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:32.902 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:32.902 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:32.902 18:21:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.902 18:21:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:32.902 18:21:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.902 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:32.902 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:32.902 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:32.902 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:32.902 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:33.160 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:33.160 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:33.160 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:33.160 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:33.160 18:21:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:33.160 18:21:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:33.160 18:21:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:33.160 18:21:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:45.403 18:21:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:45.403 18:21:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:45.403 18:21:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:45.403 18:21:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.403 18:21:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:45.403 [2024-07-22 18:21:57.180633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
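The bare `echo` lines traced at sw_hotplug.sh@56-62 are one side of the sysfs hotplug dance (the per-device `echo 1` at @40 earlier is presumably each device's `remove` node); xtrace does not show redirection targets, so the exact sysfs nodes are an assumption here, but the shape (a global `echo 1`, then per device a driver name, the BDF twice, and an empty string) matches the standard rescan-and-rebind sequence. An illustrative sketch, not the verbatim script:

    # Assumed sysfs targets for the traced echoes; xtrace hides the redirections.
    echo 1 > /sys/bus/pci/rescan                  # sw_hotplug.sh@56: rediscover functions
    for bdf in 0000:00:10.0 0000:00:11.0; do
      echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # @59
      echo "$bdf" > /sys/bus/pci/drivers_probe    # @60/@61: (re)bind per the override
      echo '' > "/sys/bus/pci/devices/$bdf/driver_override"                # @62
    done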
00:14:45.403 [2024-07-22 18:21:57.184040] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.403 [2024-07-22 18:21:57.184227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.403 [2024-07-22 18:21:57.184394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.403 [2024-07-22 18:21:57.184549] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.403 [2024-07-22 18:21:57.184665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.403 [2024-07-22 18:21:57.184808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.403 [2024-07-22 18:21:57.185000] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.403 [2024-07-22 18:21:57.185122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.403 [2024-07-22 18:21:57.185270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.403 [2024-07-22 18:21:57.185403] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.403 [2024-07-22 18:21:57.185519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.403 [2024-07-22 18:21:57.185660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.403 [2024-07-22 18:21:57.185858] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:14:45.403 [2024-07-22 18:21:57.185980] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:14:45.403 [2024-07-22 18:21:57.186102] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:14:45.403 [2024-07-22 18:21:57.186217] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:14:45.403 18:21:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:45.403 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:45.661 [2024-07-22 18:21:57.580659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:45.661 [2024-07-22 18:21:57.583832] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.661 [2024-07-22 18:21:57.584028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.661 [2024-07-22 18:21:57.584186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.661 [2024-07-22 18:21:57.584410] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.661 [2024-07-22 18:21:57.584538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.661 [2024-07-22 18:21:57.584701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.661 [2024-07-22 18:21:57.584897] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.661 [2024-07-22 18:21:57.585024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.661 [2024-07-22 18:21:57.585173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.661 [2024-07-22 18:21:57.585323] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.661 [2024-07-22 18:21:57.585526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.661 [2024-07-22 18:21:57.585605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.919 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:45.919 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:45.919 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:45.919 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:45.919 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:45.919 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:45.919 18:21:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.919 18:21:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:45.919 18:21:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.919 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:45.919 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:45.919 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:45.919 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:45.919 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:46.177 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:46.177 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:46.177 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:46.177 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:46.177 18:21:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
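Every poll in this log goes through bdev_bdfs (sw_hotplug.sh@12-13): it asks SPDK, not the kernel, which controllers still back bdevs, so a device only counts as gone once the bdev layer has finished tearing it down. The /dev/fd/63 in the trace is bash process substitution. Reconstructed from the xtrace:

    # bdev_bdfs as reconstructed from the sw_hotplug.sh@12-13 trace above.
    bdev_bdfs() {
      jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }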
00:14:46.177 18:21:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:46.177 18:21:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:46.177 18:21:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:58.416 18:22:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.416 18:22:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:58.416 18:22:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:58.416 [2024-07-22 18:22:10.181414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:58.416 18:22:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:58.416 18:22:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:58.416 [2024-07-22 18:22:10.185030] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.416 [2024-07-22 18:22:10.185075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.416 [2024-07-22 18:22:10.185108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.416 [2024-07-22 18:22:10.185136] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.416 [2024-07-22 18:22:10.185157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.416 [2024-07-22 18:22:10.185172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.416 [2024-07-22 18:22:10.185195] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.416 [2024-07-22 18:22:10.185211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.416 [2024-07-22 18:22:10.185230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.416 [2024-07-22 18:22:10.185256] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.416 [2024-07-22 18:22:10.185275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.416 [2024-07-22 18:22:10.185290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.416 18:22:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:58.416 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:58.675 [2024-07-22 18:22:10.581438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
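The `(( 1 > 0 ))` / `sleep 0.5` pairs above are the detach poll at sw_hotplug.sh@50-51: as long as bdev_bdfs still returns addresses, report which BDFs are still present and retry every half second. The loop shape implied by the trace (structure simplified):

    # Poll-until-gone loop implied by sw_hotplug.sh@50-51.
    while :; do
      bdfs=($(bdev_bdfs))
      ((${#bdfs[@]} == 0)) && break
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
    done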
00:14:58.675 [2024-07-22 18:22:10.585043] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.675 [2024-07-22 18:22:10.585117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.675 [2024-07-22 18:22:10.585156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.675 [2024-07-22 18:22:10.585188] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.675 [2024-07-22 18:22:10.585205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.675 [2024-07-22 18:22:10.585230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.675 [2024-07-22 18:22:10.585248] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.675 [2024-07-22 18:22:10.585270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.675 [2024-07-22 18:22:10.585285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.675 [2024-07-22 18:22:10.585306] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.675 [2024-07-22 18:22:10.585321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.675 [2024-07-22 18:22:10.585340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.934 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:58.934 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:58.934 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:58.934 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:58.934 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:58.934 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:58.934 18:22:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.934 18:22:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:58.934 18:22:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.934 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:58.934 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:58.934 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:58.934 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:58.934 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:59.192 18:22:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:59.192 18:22:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:59.192 18:22:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:59.192 18:22:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:59.192 18:22:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:59.192 18:22:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:59.192 18:22:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:59.192 18:22:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.09 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.09 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.09 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.09 2 00:15:11.463 remove_attach_helper took 45.09s to complete (handling 2 nvme drive(s)) 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:15:11.463 18:22:23 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:11.463 18:22:23 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:11.463 18:22:23 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:18.026 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:18.026 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:18.026 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:18.026 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:18.026 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:18.026 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:18.026 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:18.026 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:18.026 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:18.026 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:18.026 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:18.026 18:22:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.026 18:22:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:18.026 [2024-07-22 18:22:29.297076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:18.026 [2024-07-22 18:22:29.299108] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:18.026 [2024-07-22 18:22:29.299175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.026 [2024-07-22 18:22:29.299210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.026 [2024-07-22 18:22:29.299265] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:18.026 [2024-07-22 18:22:29.299292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.026 [2024-07-22 18:22:29.299308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.026 [2024-07-22 18:22:29.299326] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:18.026 [2024-07-22 18:22:29.299340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.026 [2024-07-22 18:22:29.299359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.026 [2024-07-22 18:22:29.299387] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:18.026 [2024-07-22 18:22:29.299406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.026 [2024-07-22 18:22:29.299420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.026 18:22:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.026 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:18.026 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:18.026 [2024-07-22 18:22:29.697076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
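Between the two batches of hotplug events (sw_hotplug.sh@119-122, traced above) the test also exercises SPDK's own hotplug poller by turning it off and back on over RPC, and each batch is timed through bash's `time` builtin with a bare-seconds TIMEFORMAT (the 45.09 printed above, autotest_common.sh@709-716). Equivalent standalone commands, assuming scripts/rpc.py as the client behind the rpc_cmd wrapper:

    # sw_hotplug.sh@119-120 via the standalone RPC client:
    scripts/rpc.py bdev_nvme_set_hotplug -d     # disable the bdev_nvme hotplug poller
    scripts/rpc.py bdev_nvme_set_hotplug -e     # re-enable it for the next batch
    # autotest_common.sh@709-716: time a helper, printing only wall-clock seconds.
    TIMEFORMAT=%2R
    time remove_attach_helper 3 6 true          # -> "45.09" in the log above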
00:15:18.026 [2024-07-22 18:22:29.699900] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:18.027 [2024-07-22 18:22:29.699961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.027 [2024-07-22 18:22:29.699985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.027 [2024-07-22 18:22:29.700015] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:18.027 [2024-07-22 18:22:29.700031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.027 [2024-07-22 18:22:29.700062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.027 [2024-07-22 18:22:29.700078] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:18.027 [2024-07-22 18:22:29.700094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.027 [2024-07-22 18:22:29.700108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.027 [2024-07-22 18:22:29.700138] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:18.027 [2024-07-22 18:22:29.700152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.027 [2024-07-22 18:22:29.700171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.027 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:18.027 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:18.027 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:18.027 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:18.027 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:18.027 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:18.027 18:22:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.027 18:22:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:18.027 18:22:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.027 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:18.027 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:18.027 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:18.027 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:18.027 18:22:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:18.285 18:22:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:18.285 18:22:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:18.285 18:22:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:18.285 18:22:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:18.285 18:22:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:18.285 18:22:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:18.285 18:22:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:18.285 18:22:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:30.629 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:30.629 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:30.630 18:22:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.630 18:22:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:30.630 18:22:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:30.630 18:22:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.630 18:22:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:30.630 [2024-07-22 18:22:42.297732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
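After rebinding, the script gives the controllers twelve seconds to re-attach (sw_hotplug.sh@66) and then insists the SPDK view matches the full expected set; the heavily backslashed `[[ ... ]]` above is just xtrace escaping the literal right-hand side. In plain form:

    # Reattach check per sw_hotplug.sh@66-71; xtrace prints the pattern escaped.
    sleep 12
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == '0000:00:10.0 0000:00:11.0' ]]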
00:15:30.630 [2024-07-22 18:22:42.299849] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.630 [2024-07-22 18:22:42.299997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.630 [2024-07-22 18:22:42.300143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.630 [2024-07-22 18:22:42.300342] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.630 [2024-07-22 18:22:42.300474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.630 [2024-07-22 18:22:42.300690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.630 [2024-07-22 18:22:42.300831] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.630 [2024-07-22 18:22:42.300963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.630 [2024-07-22 18:22:42.301073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.630 [2024-07-22 18:22:42.301118] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.630 [2024-07-22 18:22:42.301140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.630 [2024-07-22 18:22:42.301154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.630 18:22:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:30.630 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:30.889 [2024-07-22 18:22:42.697744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:30.889 [2024-07-22 18:22:42.699842] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.889 [2024-07-22 18:22:42.699913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.889 [2024-07-22 18:22:42.699935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.889 [2024-07-22 18:22:42.699965] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.889 [2024-07-22 18:22:42.699981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.889 [2024-07-22 18:22:42.699997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.889 [2024-07-22 18:22:42.700012] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.889 [2024-07-22 18:22:42.700029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.889 [2024-07-22 18:22:42.700042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.889 [2024-07-22 18:22:42.700059] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.889 [2024-07-22 18:22:42.700073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.889 [2024-07-22 18:22:42.700088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.889 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:30.889 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:30.889 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:30.889 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:30.889 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:30.889 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:30.889 18:22:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.889 18:22:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:30.889 18:22:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.147 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:31.147 18:22:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:31.147 18:22:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:31.147 18:22:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:31.147 18:22:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:31.147 18:22:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:31.147 18:22:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:31.147 18:22:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:31.147 18:22:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:31.147 18:22:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:31.405 18:22:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:31.405 18:22:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:31.405 18:22:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:43.608 18:22:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.608 18:22:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:43.608 18:22:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:43.608 [2024-07-22 18:22:55.297978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:43.608 [2024-07-22 18:22:55.300103] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:43.608 [2024-07-22 18:22:55.300205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.608 [2024-07-22 18:22:55.300291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.608 [2024-07-22 18:22:55.300362] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:43.608 [2024-07-22 18:22:55.300414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.608 [2024-07-22 18:22:55.300475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.608 [2024-07-22 18:22:55.300539] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:43.608 [2024-07-22 18:22:55.300581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.608 [2024-07-22 18:22:55.300641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.608 [2024-07-22 18:22:55.300776] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:43.608 [2024-07-22 18:22:55.300826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.608 [2024-07-22 18:22:55.300887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:43.608 18:22:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.608 18:22:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:43.608 18:22:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.608 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:43.609 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:43.868 [2024-07-22 18:22:55.697994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:15:43.868 [2024-07-22 18:22:55.700140] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:43.868 [2024-07-22 18:22:55.700314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.868 [2024-07-22 18:22:55.700544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.868 [2024-07-22 18:22:55.700808] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:43.868 [2024-07-22 18:22:55.700950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.868 [2024-07-22 18:22:55.701106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.868 [2024-07-22 18:22:55.701365] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:43.868 [2024-07-22 18:22:55.701576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.868 [2024-07-22 18:22:55.701759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.868 [2024-07-22 18:22:55.701981] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:43.868 [2024-07-22 18:22:55.702007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.868 [2024-07-22 18:22:55.702026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.868 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:43.868 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:43.868 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:43.868 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:43.868 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:43.868 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:15:43.868 18:22:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.868 18:22:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:44.126 18:22:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.126 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:44.126 18:22:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:44.126 18:22:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:44.126 18:22:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:44.126 18:22:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:44.126 18:22:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:44.126 18:22:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:44.126 18:22:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:44.126 18:22:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:44.126 18:22:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:44.385 18:22:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:44.385 18:22:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:44.385 18:22:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:56.614 18:23:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:56.614 18:23:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:56.614 18:23:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:56.614 18:23:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:56.614 18:23:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:56.614 18:23:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.614 18:23:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:56.614 18:23:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.06 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.06 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:15:56.614 18:23:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.06 00:15:56.614 18:23:08 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.06 2 00:15:56.614 remove_attach_helper took 45.06s to complete (handling 2 nvme drive(s)) 18:23:08 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:15:56.614 18:23:08 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 74000 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 74000 ']' 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 74000 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74000 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:56.614 18:23:08 
sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74000' 00:15:56.614 killing process with pid 74000 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@967 -- # kill 74000 00:15:56.614 18:23:08 sw_hotplug -- common/autotest_common.sh@972 -- # wait 74000 00:15:59.142 18:23:10 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:59.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:59.399 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:59.399 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:59.689 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:59.689 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:59.689 00:15:59.689 real 2m31.832s 00:15:59.689 user 1m51.954s 00:15:59.689 sys 0m19.680s 00:15:59.689 18:23:11 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:59.689 18:23:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:59.689 ************************************ 00:15:59.689 END TEST sw_hotplug 00:15:59.689 ************************************ 00:15:59.689 18:23:11 -- common/autotest_common.sh@1142 -- # return 0 00:15:59.689 18:23:11 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:15:59.689 18:23:11 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:59.689 18:23:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:59.689 18:23:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.689 18:23:11 -- common/autotest_common.sh@10 -- # set +x 00:15:59.689 ************************************ 00:15:59.689 START TEST nvme_xnvme 00:15:59.689 ************************************ 00:15:59.689 18:23:11 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:59.962 * Looking for test storage... 
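Teardown of the hotplug app (pid 74000, traced above at autotest_common.sh@948-972) goes through the killprocess() helper: refuse an empty pid, probe liveness with `kill -0`, resolve the comm name (reactor_0 here, the SPDK reactor thread) so it never kills a sudo wrapper, then kill and reap. A reconstruction from the xtrace, details simplified:

    # killprocess() reconstructed from the xtrace above; simplified.
    killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0       # already gone
      if [[ $(uname) == Linux ]]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1            # never kill the sudo parent
      fi
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
    }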
00:15:59.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:59.962 18:23:11 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.962 18:23:11 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.962 18:23:11 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.962 18:23:11 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.962 18:23:11 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.962 18:23:11 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.962 18:23:11 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.962 18:23:11 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:59.962 18:23:11 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.962 18:23:11 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:15:59.962 18:23:11 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:59.962 18:23:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.962 18:23:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:59.962 ************************************ 00:15:59.962 START TEST xnvme_to_malloc_dd_copy 00:15:59.962 ************************************ 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
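Setup for the copy test is traced above: dd/common.sh@186 loads null_blk with gb=1, giving a 1 GiB /dev/nullb0 for the xnvme bdev to sit on, and the malloc bdev is sized to match, 512 B blocks here at xnvme.sh@16 and 2097152 of them per the config just below. The sizes line up exactly:

    # Size bookkeeping from the traced parameters:
    # 2097152 blocks * 512 B/block = 1073741824 B = 1 GiB, same as null_blk gb=1.
    modprobe null_blk gb=1          # dd/common.sh@186: backing device for the xnvme bdev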
00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:15:59.962 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:59.963 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:15:59.963 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:59.963 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:15:59.963 18:23:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:59.963 { 00:15:59.963 "subsystems": [ 00:15:59.963 { 00:15:59.963 "subsystem": "bdev", 00:15:59.963 "config": [ 00:15:59.963 { 00:15:59.963 "params": { 00:15:59.963 "block_size": 512, 00:15:59.963 "num_blocks": 2097152, 00:15:59.963 "name": "malloc0" 00:15:59.963 }, 00:15:59.963 "method": "bdev_malloc_create" 00:15:59.963 }, 00:15:59.963 { 00:15:59.963 "params": { 00:15:59.963 "io_mechanism": "libaio", 00:15:59.963 "filename": "/dev/nullb0", 00:15:59.963 "name": "null0" 00:15:59.963 }, 00:15:59.963 "method": "bdev_xnvme_create" 00:15:59.963 }, 00:15:59.963 { 00:15:59.963 "method": "bdev_wait_for_examine" 00:15:59.963 } 00:15:59.963 ] 00:15:59.963 } 00:15:59.963 ] 00:15:59.963 } 00:15:59.963 [2024-07-22 18:23:11.865646] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
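The JSON printed above is the whole spdk_dd configuration for the first pass: a malloc bdev as source, an xnvme bdev (libaio engine) on /dev/nullb0 as target, plus bdev_wait_for_examine. The traced invocation feeds it over a file descriptor; an equivalent standalone run, with the JSON saved to config.json instead of passed via /dev/fd/62, would be:

    # Equivalent to the traced xnvme.sh@42 invocation.
    build/bin/spdk_dd --ib=malloc0 --ob=null0 --json config.json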
00:15:59.963 [2024-07-22 18:23:11.865843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75339 ] 00:16:00.221 [2024-07-22 18:23:12.043568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.479 [2024-07-22 18:23:12.326007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.325  Copying: 164/1024 [MB] (164 MBps) Copying: 327/1024 [MB] (163 MBps) Copying: 490/1024 [MB] (163 MBps) Copying: 654/1024 [MB] (163 MBps) Copying: 817/1024 [MB] (162 MBps) Copying: 986/1024 [MB] (168 MBps) Copying: 1024/1024 [MB] (average 164 MBps) 00:16:12.325 00:16:12.325 18:23:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:16:12.325 18:23:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:16:12.325 18:23:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:12.325 18:23:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:12.325 { 00:16:12.325 "subsystems": [ 00:16:12.325 { 00:16:12.325 "subsystem": "bdev", 00:16:12.325 "config": [ 00:16:12.325 { 00:16:12.325 "params": { 00:16:12.325 "block_size": 512, 00:16:12.325 "num_blocks": 2097152, 00:16:12.325 "name": "malloc0" 00:16:12.325 }, 00:16:12.325 "method": "bdev_malloc_create" 00:16:12.325 }, 00:16:12.325 { 00:16:12.325 "params": { 00:16:12.325 "io_mechanism": "libaio", 00:16:12.325 "filename": "/dev/nullb0", 00:16:12.325 "name": "null0" 00:16:12.325 }, 00:16:12.325 "method": "bdev_xnvme_create" 00:16:12.325 }, 00:16:12.325 { 00:16:12.325 "method": "bdev_wait_for_examine" 00:16:12.325 } 00:16:12.325 ] 00:16:12.325 } 00:16:12.325 ] 00:16:12.325 } 00:16:12.325 [2024-07-22 18:23:23.855025] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
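The second pass (xnvme.sh@47, traced above) swaps the roles with --ib=null0 --ob=malloc0, so the same 1 GiB is now read back through the xnvme/libaio path instead of written: the write pass above averaged 164 MBps, and the read-back below lands at 168 MBps.

    # xnvme.sh@47: same config, reversed direction - exercises the read path.
    build/bin/spdk_dd --ib=null0 --ob=malloc0 --json config.json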
00:16:12.325 [2024-07-22 18:23:23.855207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75474 ] 00:16:12.325 [2024-07-22 18:23:24.032157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.325 [2024-07-22 18:23:24.278037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.188  Copying: 172/1024 [MB] (172 MBps) Copying: 344/1024 [MB] (171 MBps) Copying: 510/1024 [MB] (165 MBps) Copying: 678/1024 [MB] (168 MBps) Copying: 845/1024 [MB] (167 MBps) Copying: 1013/1024 [MB] (167 MBps) Copying: 1024/1024 [MB] (average 168 MBps) 00:16:24.188 00:16:24.188 18:23:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:16:24.188 18:23:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:24.188 18:23:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:16:24.188 18:23:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:16:24.188 18:23:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:24.188 18:23:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:24.188 { 00:16:24.188 "subsystems": [ 00:16:24.188 { 00:16:24.188 "subsystem": "bdev", 00:16:24.188 "config": [ 00:16:24.188 { 00:16:24.188 "params": { 00:16:24.188 "block_size": 512, 00:16:24.188 "num_blocks": 2097152, 00:16:24.188 "name": "malloc0" 00:16:24.188 }, 00:16:24.188 "method": "bdev_malloc_create" 00:16:24.188 }, 00:16:24.188 { 00:16:24.188 "params": { 00:16:24.188 "io_mechanism": "io_uring", 00:16:24.188 "filename": "/dev/nullb0", 00:16:24.188 "name": "null0" 00:16:24.188 }, 00:16:24.188 "method": "bdev_xnvme_create" 00:16:24.188 }, 00:16:24.188 { 00:16:24.188 "method": "bdev_wait_for_examine" 00:16:24.188 } 00:16:24.188 ] 00:16:24.188 } 00:16:24.188 ] 00:16:24.188 } 00:16:24.188 [2024-07-22 18:23:35.616869] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
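Here the suite enters its second loop pass: one associative array describes the xnvme bdev, and only io_mechanism changes between runs. The pattern, condensed from the xnvme.sh traces above (gen_conf, which serializes the method_bdev_* arrays into the JSON shown earlier, is left as a comment):

xnvme_io=(libaio io_uring)
declare -A method_bdev_xnvme_create_0=([name]=null0 [filename]=/dev/nullb0)
for io in "${xnvme_io[@]}"; do
  method_bdev_xnvme_create_0[io_mechanism]=$io
  # gen_conf emits the arrays as JSON and both copy directions rerun,
  # so every engine in xnvme_io gets the same malloc0 <-> null0 workload.
done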
00:16:24.188 [2024-07-22 18:23:35.617074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75600 ] 00:16:24.188 [2024-07-22 18:23:35.792039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.188 [2024-07-22 18:23:36.027466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.283  Copying: 183/1024 [MB] (183 MBps) Copying: 364/1024 [MB] (181 MBps) Copying: 545/1024 [MB] (180 MBps) Copying: 727/1024 [MB] (182 MBps) Copying: 908/1024 [MB] (181 MBps) Copying: 1024/1024 [MB] (average 181 MBps) 00:16:35.283 00:16:35.283 18:23:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:16:35.283 18:23:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:16:35.283 18:23:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:35.284 18:23:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:35.284 { 00:16:35.284 "subsystems": [ 00:16:35.284 { 00:16:35.284 "subsystem": "bdev", 00:16:35.284 "config": [ 00:16:35.284 { 00:16:35.284 "params": { 00:16:35.284 "block_size": 512, 00:16:35.284 "num_blocks": 2097152, 00:16:35.284 "name": "malloc0" 00:16:35.284 }, 00:16:35.284 "method": "bdev_malloc_create" 00:16:35.284 }, 00:16:35.284 { 00:16:35.284 "params": { 00:16:35.284 "io_mechanism": "io_uring", 00:16:35.284 "filename": "/dev/nullb0", 00:16:35.284 "name": "null0" 00:16:35.284 }, 00:16:35.284 "method": "bdev_xnvme_create" 00:16:35.284 }, 00:16:35.284 { 00:16:35.284 "method": "bdev_wait_for_examine" 00:16:35.284 } 00:16:35.284 ] 00:16:35.284 } 00:16:35.284 ] 00:16:35.284 } 00:16:35.284 [2024-07-22 18:23:46.862924] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:35.284 [2024-07-22 18:23:46.863096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75722 ] 00:16:35.284 [2024-07-22 18:23:47.040033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.284 [2024-07-22 18:23:47.281725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.489  Copying: 183/1024 [MB] (183 MBps) Copying: 365/1024 [MB] (182 MBps) Copying: 549/1024 [MB] (183 MBps) Copying: 732/1024 [MB] (182 MBps) Copying: 915/1024 [MB] (183 MBps) Copying: 1024/1024 [MB] (average 183 MBps) 00:16:46.489 00:16:46.489 18:23:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:16:46.489 18:23:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:16:46.489 00:16:46.489 real 0m46.233s 00:16:46.489 user 0m40.174s 00:16:46.489 sys 0m5.427s 00:16:46.489 18:23:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:46.489 18:23:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:46.490 ************************************ 00:16:46.490 END TEST xnvme_to_malloc_dd_copy 00:16:46.490 ************************************ 00:16:46.490 18:23:58 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:46.490 18:23:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:46.490 18:23:58 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:46.490 18:23:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.490 18:23:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:46.490 ************************************ 00:16:46.490 START TEST xnvme_bdevperf 00:16:46.490 ************************************ 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # 
for io in "${xnvme_io[@]}" 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:46.490 18:23:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:46.490 { 00:16:46.490 "subsystems": [ 00:16:46.490 { 00:16:46.490 "subsystem": "bdev", 00:16:46.490 "config": [ 00:16:46.490 { 00:16:46.490 "params": { 00:16:46.490 "io_mechanism": "libaio", 00:16:46.490 "filename": "/dev/nullb0", 00:16:46.490 "name": "null0" 00:16:46.490 }, 00:16:46.490 "method": "bdev_xnvme_create" 00:16:46.490 }, 00:16:46.490 { 00:16:46.490 "method": "bdev_wait_for_examine" 00:16:46.490 } 00:16:46.490 ] 00:16:46.490 } 00:16:46.490 ] 00:16:46.490 } 00:16:46.490 [2024-07-22 18:23:58.138735] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:46.490 [2024-07-22 18:23:58.138939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75871 ] 00:16:46.490 [2024-07-22 18:23:58.303733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.748 [2024-07-22 18:23:58.547177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.007 Running I/O for 5 seconds... 00:16:52.323 00:16:52.323 Latency(us) 00:16:52.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.324 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:52.324 null0 : 5.00 115928.14 452.84 0.00 0.00 548.65 168.49 1347.96 00:16:52.324 =================================================================================================================== 00:16:52.324 Total : 115928.14 452.84 0.00 0.00 548.65 168.49 1347.96 00:16:53.296 18:24:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:16:53.296 18:24:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:53.296 18:24:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:53.296 18:24:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:53.296 18:24:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:53.296 18:24:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:53.296 { 00:16:53.296 "subsystems": [ 00:16:53.296 { 00:16:53.296 "subsystem": "bdev", 00:16:53.296 "config": [ 00:16:53.296 { 00:16:53.296 "params": { 00:16:53.296 "io_mechanism": "io_uring", 00:16:53.296 "filename": "/dev/nullb0", 00:16:53.296 "name": "null0" 00:16:53.296 }, 00:16:53.297 "method": "bdev_xnvme_create" 00:16:53.297 }, 00:16:53.297 { 00:16:53.297 "method": "bdev_wait_for_examine" 00:16:53.297 } 00:16:53.297 ] 00:16:53.297 } 00:16:53.297 ] 00:16:53.297 } 00:16:53.297 [2024-07-22 18:24:05.222004] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:53.297 [2024-07-22 18:24:05.222221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75953 ] 00:16:53.555 [2024-07-22 18:24:05.397174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.813 [2024-07-22 18:24:05.639201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.071 Running I/O for 5 seconds... 00:16:59.338 00:16:59.338 Latency(us) 00:16:59.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.338 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:59.338 null0 : 5.00 157038.27 613.43 0.00 0.00 404.32 240.17 636.74 00:16:59.338 =================================================================================================================== 00:16:59.338 Total : 157038.27 613.43 0.00 0.00 404.32 240.17 636.74 00:17:00.274 18:24:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:17:00.274 18:24:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:17:00.274 00:17:00.274 real 0m14.164s 00:17:00.274 user 0m11.008s 00:17:00.274 sys 0m2.929s 00:17:00.274 18:24:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.274 18:24:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:00.274 ************************************ 00:17:00.274 END TEST xnvme_bdevperf 00:17:00.274 ************************************ 00:17:00.274 18:24:12 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:00.274 00:17:00.274 real 1m0.588s 00:17:00.274 user 0m51.254s 00:17:00.274 sys 0m8.463s 00:17:00.274 18:24:12 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.274 18:24:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:00.274 ************************************ 00:17:00.274 END TEST nvme_xnvme 00:17:00.274 ************************************ 00:17:00.274 18:24:12 -- common/autotest_common.sh@1142 -- # return 0 00:17:00.274 18:24:12 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:00.274 18:24:12 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:00.274 18:24:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.274 18:24:12 -- common/autotest_common.sh@10 -- # set +x 00:17:00.274 ************************************ 00:17:00.274 START TEST blockdev_xnvme 00:17:00.274 ************************************ 00:17:00.274 18:24:12 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:00.534 * Looking for test storage... 
00:17:00.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=76093 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 76093 00:17:00.534 18:24:12 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 76093 ']' 00:17:00.534 18:24:12 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:00.534 18:24:12 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.534 18:24:12 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.534 18:24:12 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.534 18:24:12 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.534 18:24:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:00.534 [2024-07-22 18:24:12.490451] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
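The harness skeleton traced here is the standard one: launch spdk_tgt, block until its RPC socket answers, then drive it with rpc_cmd. Sketched with the paths from the trace (waitforlisten and killprocess are the autotest_common.sh helpers seen above):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$spdk_tgt_pid"   # polls /var/tmp/spdk.sock until the target answers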
00:17:00.534 [2024-07-22 18:24:12.490651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76093 ] 00:17:00.793 [2024-07-22 18:24:12.664804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.051 [2024-07-22 18:24:12.904519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.988 18:24:13 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.988 18:24:13 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:17:01.988 18:24:13 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:01.988 18:24:13 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:17:01.988 18:24:13 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:17:01.988 18:24:13 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:17:01.988 18:24:13 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:01.988 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:02.246 Waiting for block devices as requested 00:17:02.246 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.504 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.504 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.504 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:07.773 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:17:07.773 18:24:19 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:07.773 18:24:19 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:17:07.773 nvme0n1 00:17:07.773 nvme1n1 00:17:07.773 nvme2n1 00:17:07.773 nvme2n2 00:17:07.773 nvme2n3 00:17:07.773 nvme3n1 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.773 
18:24:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a5c46bdf-fcce-45bd-a3ad-6d40d353d828"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a5c46bdf-fcce-45bd-a3ad-6d40d353d828",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b5d84d06-b8f2-42cb-aec6-5468ba3cedeb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b5d84d06-b8f2-42cb-aec6-5468ba3cedeb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c1eee288-171f-4347-8c35-bd55d97f0f80"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c1eee288-171f-4347-8c35-bd55d97f0f80",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "3dceee94-1d4d-47bf-8064-4f5ee4694854"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3dceee94-1d4d-47bf-8064-4f5ee4694854",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "855a0d07-8de0-496e-9fc6-daffd6abebb1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "855a0d07-8de0-496e-9fc6-daffd6abebb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "270757db-97c0-48ad-b74a-ea95c026f4f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "270757db-97c0-48ad-b74a-ea95c026f4f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:07.773 18:24:19 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 76093 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 76093 ']' 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 76093 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.773 18:24:19 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76093 00:17:08.031 18:24:19 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:08.031 18:24:19 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:08.031 killing process with pid 76093 00:17:08.031 18:24:19 blockdev_xnvme -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 76093' 00:17:08.031 18:24:19 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 76093 00:17:08.031 18:24:19 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 76093 00:17:10.560 18:24:22 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:10.560 18:24:22 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:10.561 18:24:22 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:10.561 18:24:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.561 18:24:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:10.561 ************************************ 00:17:10.561 START TEST bdev_hello_world 00:17:10.561 ************************************ 00:17:10.561 18:24:22 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:10.561 [2024-07-22 18:24:22.103736] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:10.561 [2024-07-22 18:24:22.103884] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76472 ] 00:17:10.561 [2024-07-22 18:24:22.267927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.561 [2024-07-22 18:24:22.504429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.127 [2024-07-22 18:24:22.930713] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:11.127 [2024-07-22 18:24:22.930766] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:17:11.127 [2024-07-22 18:24:22.930794] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:11.127 [2024-07-22 18:24:22.933091] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:11.127 [2024-07-22 18:24:22.933443] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:11.127 [2024-07-22 18:24:22.933479] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:11.127 [2024-07-22 18:24:22.933767] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
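The hello-world pass condenses to a single command: hello_bdev loads the same JSON config, opens nvme0n1, writes a buffer, reads it back, and succeeds on the literal string just printed above. From the trace:

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1
# Success is the NOTICE above: 'Read string from bdev : Hello World!'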
00:17:11.127 00:17:11.127 [2024-07-22 18:24:22.933808] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:12.505 00:17:12.505 real 0m2.090s 00:17:12.505 user 0m1.728s 00:17:12.505 sys 0m0.247s 00:17:12.505 18:24:24 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:12.505 18:24:24 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:12.505 ************************************ 00:17:12.505 END TEST bdev_hello_world 00:17:12.505 ************************************ 00:17:12.505 18:24:24 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:12.505 18:24:24 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:12.505 18:24:24 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:12.505 18:24:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:12.505 18:24:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.505 ************************************ 00:17:12.505 START TEST bdev_bounds 00:17:12.505 ************************************ 00:17:12.505 18:24:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:17:12.505 18:24:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=76510 00:17:12.505 18:24:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:12.505 Process bdevio pid: 76510 00:17:12.505 18:24:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 76510' 00:17:12.505 18:24:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 76510 00:17:12.505 18:24:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:12.505 18:24:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 76510 ']' 00:17:12.505 18:24:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.505 18:24:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.505 18:24:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.505 18:24:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.505 18:24:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:12.505 [2024-07-22 18:24:24.255308] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
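bdev_bounds is a two-process test: bdevio comes up as an RPC server (cores 0x7, per the EAL line that follows) and tests.py fires the whole matrix at it, one suite per bdev. Sketched from the blockdev.sh trace:

bdevio_dir=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio
"$bdevio_dir"/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
bdevio_pid=$!
waitforlisten "$bdevio_pid"
"$bdevio_dir"/tests.py perform_tests   # drives the CUnit suites listed below
killprocess "$bdevio_pid"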
00:17:12.505 [2024-07-22 18:24:24.255507] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76510 ] 00:17:12.505 [2024-07-22 18:24:24.430020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:12.763 [2024-07-22 18:24:24.673783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.763 [2024-07-22 18:24:24.673880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.763 [2024-07-22 18:24:24.673884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.330 18:24:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.330 18:24:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:17:13.330 18:24:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:13.330 I/O targets: 00:17:13.330 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:13.330 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:13.330 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:13.330 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:13.330 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:13.330 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:13.330 00:17:13.330 00:17:13.330 CUnit - A unit testing framework for C - Version 2.1-3 00:17:13.330 http://cunit.sourceforge.net/ 00:17:13.330 00:17:13.330 00:17:13.330 Suite: bdevio tests on: nvme3n1 00:17:13.330 Test: blockdev write read block ...passed 00:17:13.330 Test: blockdev write zeroes read block ...passed 00:17:13.330 Test: blockdev write zeroes read no split ...passed 00:17:13.330 Test: blockdev write zeroes read split ...passed 00:17:13.588 Test: blockdev write zeroes read split partial ...passed 00:17:13.588 Test: blockdev reset ...passed 00:17:13.589 Test: blockdev write read 8 blocks ...passed 00:17:13.589 Test: blockdev write read size > 128k ...passed 00:17:13.589 Test: blockdev write read invalid size ...passed 00:17:13.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:13.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:13.589 Test: blockdev write read max offset ...passed 00:17:13.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:13.589 Test: blockdev writev readv 8 blocks ...passed 00:17:13.589 Test: blockdev writev readv 30 x 1block ...passed 00:17:13.589 Test: blockdev writev readv block ...passed 00:17:13.589 Test: blockdev writev readv size > 128k ...passed 00:17:13.589 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:13.589 Test: blockdev comparev and writev ...passed 00:17:13.589 Test: blockdev nvme passthru rw ...passed 00:17:13.589 Test: blockdev nvme passthru vendor specific ...passed 00:17:13.589 Test: blockdev nvme admin passthru ...passed 00:17:13.589 Test: blockdev copy ...passed 00:17:13.589 Suite: bdevio tests on: nvme2n3 00:17:13.589 Test: blockdev write read block ...passed 00:17:13.589 Test: blockdev write zeroes read block ...passed 00:17:13.589 Test: blockdev write zeroes read no split ...passed 00:17:13.589 Test: blockdev write zeroes read split ...passed 00:17:13.589 Test: blockdev write zeroes read split partial ...passed 00:17:13.589 Test: blockdev reset ...passed 
00:17:13.589 Test: blockdev write read 8 blocks ...passed 00:17:13.589 Test: blockdev write read size > 128k ...passed 00:17:13.589 Test: blockdev write read invalid size ...passed 00:17:13.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:13.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:13.589 Test: blockdev write read max offset ...passed 00:17:13.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:13.589 Test: blockdev writev readv 8 blocks ...passed 00:17:13.589 Test: blockdev writev readv 30 x 1block ...passed 00:17:13.589 Test: blockdev writev readv block ...passed 00:17:13.589 Test: blockdev writev readv size > 128k ...passed 00:17:13.589 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:13.589 Test: blockdev comparev and writev ...passed 00:17:13.589 Test: blockdev nvme passthru rw ...passed 00:17:13.589 Test: blockdev nvme passthru vendor specific ...passed 00:17:13.589 Test: blockdev nvme admin passthru ...passed 00:17:13.589 Test: blockdev copy ...passed 00:17:13.589 Suite: bdevio tests on: nvme2n2 00:17:13.589 Test: blockdev write read block ...passed 00:17:13.589 Test: blockdev write zeroes read block ...passed 00:17:13.589 Test: blockdev write zeroes read no split ...passed 00:17:13.589 Test: blockdev write zeroes read split ...passed 00:17:13.589 Test: blockdev write zeroes read split partial ...passed 00:17:13.589 Test: blockdev reset ...passed 00:17:13.589 Test: blockdev write read 8 blocks ...passed 00:17:13.589 Test: blockdev write read size > 128k ...passed 00:17:13.589 Test: blockdev write read invalid size ...passed 00:17:13.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:13.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:13.589 Test: blockdev write read max offset ...passed 00:17:13.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:13.589 Test: blockdev writev readv 8 blocks ...passed 00:17:13.589 Test: blockdev writev readv 30 x 1block ...passed 00:17:13.589 Test: blockdev writev readv block ...passed 00:17:13.589 Test: blockdev writev readv size > 128k ...passed 00:17:13.589 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:13.589 Test: blockdev comparev and writev ...passed 00:17:13.589 Test: blockdev nvme passthru rw ...passed 00:17:13.589 Test: blockdev nvme passthru vendor specific ...passed 00:17:13.589 Test: blockdev nvme admin passthru ...passed 00:17:13.589 Test: blockdev copy ...passed 00:17:13.589 Suite: bdevio tests on: nvme2n1 00:17:13.589 Test: blockdev write read block ...passed 00:17:13.589 Test: blockdev write zeroes read block ...passed 00:17:13.589 Test: blockdev write zeroes read no split ...passed 00:17:13.589 Test: blockdev write zeroes read split ...passed 00:17:13.589 Test: blockdev write zeroes read split partial ...passed 00:17:13.589 Test: blockdev reset ...passed 00:17:13.589 Test: blockdev write read 8 blocks ...passed 00:17:13.589 Test: blockdev write read size > 128k ...passed 00:17:13.589 Test: blockdev write read invalid size ...passed 00:17:13.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:13.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:13.589 Test: blockdev write read max offset ...passed 00:17:13.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:13.589 Test: blockdev writev readv 8 blocks 
...passed 00:17:13.589 Test: blockdev writev readv 30 x 1block ...passed 00:17:13.589 Test: blockdev writev readv block ...passed 00:17:13.589 Test: blockdev writev readv size > 128k ...passed 00:17:13.589 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:13.589 Test: blockdev comparev and writev ...passed 00:17:13.589 Test: blockdev nvme passthru rw ...passed 00:17:13.589 Test: blockdev nvme passthru vendor specific ...passed 00:17:13.589 Test: blockdev nvme admin passthru ...passed 00:17:13.589 Test: blockdev copy ...passed 00:17:13.589 Suite: bdevio tests on: nvme1n1 00:17:13.589 Test: blockdev write read block ...passed 00:17:13.589 Test: blockdev write zeroes read block ...passed 00:17:13.589 Test: blockdev write zeroes read no split ...passed 00:17:13.589 Test: blockdev write zeroes read split ...passed 00:17:13.848 Test: blockdev write zeroes read split partial ...passed 00:17:13.848 Test: blockdev reset ...passed 00:17:13.848 Test: blockdev write read 8 blocks ...passed 00:17:13.848 Test: blockdev write read size > 128k ...passed 00:17:13.848 Test: blockdev write read invalid size ...passed 00:17:13.848 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:13.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:13.848 Test: blockdev write read max offset ...passed 00:17:13.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:13.848 Test: blockdev writev readv 8 blocks ...passed 00:17:13.848 Test: blockdev writev readv 30 x 1block ...passed 00:17:13.848 Test: blockdev writev readv block ...passed 00:17:13.848 Test: blockdev writev readv size > 128k ...passed 00:17:13.848 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:13.848 Test: blockdev comparev and writev ...passed 00:17:13.848 Test: blockdev nvme passthru rw ...passed 00:17:13.848 Test: blockdev nvme passthru vendor specific ...passed 00:17:13.848 Test: blockdev nvme admin passthru ...passed 00:17:13.848 Test: blockdev copy ...passed 00:17:13.848 Suite: bdevio tests on: nvme0n1 00:17:13.848 Test: blockdev write read block ...passed 00:17:13.848 Test: blockdev write zeroes read block ...passed 00:17:13.848 Test: blockdev write zeroes read no split ...passed 00:17:13.848 Test: blockdev write zeroes read split ...passed 00:17:13.848 Test: blockdev write zeroes read split partial ...passed 00:17:13.848 Test: blockdev reset ...passed 00:17:13.848 Test: blockdev write read 8 blocks ...passed 00:17:13.848 Test: blockdev write read size > 128k ...passed 00:17:13.848 Test: blockdev write read invalid size ...passed 00:17:13.848 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:13.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:13.848 Test: blockdev write read max offset ...passed 00:17:13.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:13.848 Test: blockdev writev readv 8 blocks ...passed 00:17:13.848 Test: blockdev writev readv 30 x 1block ...passed 00:17:13.848 Test: blockdev writev readv block ...passed 00:17:13.848 Test: blockdev writev readv size > 128k ...passed 00:17:13.848 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:13.848 Test: blockdev comparev and writev ...passed 00:17:13.848 Test: blockdev nvme passthru rw ...passed 00:17:13.848 Test: blockdev nvme passthru vendor specific ...passed 00:17:13.848 Test: blockdev nvme admin passthru ...passed 00:17:13.848 Test: blockdev copy ...passed 
00:17:13.848 00:17:13.848 Run Summary: Type Total Ran Passed Failed Inactive 00:17:13.848 suites 6 6 n/a 0 0 00:17:13.848 tests 138 138 138 0 0 00:17:13.848 asserts 780 780 780 0 n/a 00:17:13.848 00:17:13.848 Elapsed time = 1.101 seconds 00:17:13.848 0 00:17:13.848 18:24:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 76510 00:17:13.848 18:24:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 76510 ']' 00:17:13.848 18:24:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 76510 00:17:13.848 18:24:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:17:13.848 18:24:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.848 18:24:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76510 00:17:13.848 18:24:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:13.848 killing process with pid 76510 00:17:13.848 18:24:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:13.848 18:24:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76510' 00:17:13.848 18:24:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 76510 00:17:13.848 18:24:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 76510 00:17:15.224 18:24:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:15.224 00:17:15.224 real 0m2.755s 00:17:15.224 user 0m6.393s 00:17:15.224 sys 0m0.410s 00:17:15.224 18:24:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:15.224 18:24:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:15.224 ************************************ 00:17:15.224 END TEST bdev_bounds 00:17:15.224 ************************************ 00:17:15.224 18:24:26 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:15.224 18:24:26 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:17:15.224 18:24:26 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:15.224 18:24:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:15.224 18:24:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.224 ************************************ 00:17:15.224 START TEST bdev_nbd 00:17:15.224 ************************************ 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 
00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=76576 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 76576 /var/tmp/spdk-nbd.sock 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 76576 ']' 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.224 18:24:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:15.224 [2024-07-22 18:24:27.052850] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
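For the nbd test the bdevs are served by bdev_svc on a dedicated RPC socket rather than by spdk_tgt, and each bdev is then mapped to a kernel /dev/nbdX node. The setup half, per the trace (the command substitution mirrors the nbd_common.sh lines that follow, where the RPC prints the device it assigned):

rpc_sock=/var/tmp/spdk-nbd.sock
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
nbd_pid=$!
waitforlisten "$nbd_pid" "$rpc_sock"
# Map one bdev to a kernel nbd node; the RPC returns the device path.
nbd_dev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" nbd_start_disk nvme0n1)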
00:17:15.224 [2024-07-22 18:24:27.053003] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.224 [2024-07-22 18:24:27.220387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.483 [2024-07-22 18:24:27.468124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:16.050 18:24:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:16.050 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.330 
1+0 records in 00:17:16.330 1+0 records out 00:17:16.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489091 s, 8.4 MB/s 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:16.330 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.617 1+0 records in 00:17:16.617 1+0 records out 00:17:16.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630526 s, 6.5 MB/s 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:16.617 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:16.875 18:24:28 
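Every nbd_start_disk above is paired with waitfornbd, which first polls /proc/partitions for the device name and then proves the device is actually usable with a single 4 KiB O_DIRECT read — the grep/dd/stat sequence visible in the trace. A sketch of that helper (test-file path shortened; the real one lives in autotest_common.sh and may differ in detail):

    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest i
        for ((i = 1; i <= 20; i++)); do
            # wait for the kernel to publish the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            # one direct 4 KiB read proves the backing bdev answers I/O
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
                local size
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        rm -f "$tmp"
        return 1
    }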
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.875 1+0 records in 00:17:16.875 1+0 records out 00:17:16.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500089 s, 8.2 MB/s 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:16.875 18:24:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.133 1+0 records in 00:17:17.133 1+0 records out 00:17:17.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000792373 s, 5.2 MB/s 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:17.133 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.392 1+0 records in 00:17:17.392 1+0 records out 00:17:17.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532126 s, 7.7 MB/s 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:17.392 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:17.393 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:17:17.651 18:24:29 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.651 1+0 records in 00:17:17.651 1+0 records out 00:17:17.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615527 s, 6.7 MB/s 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:17.651 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:17.910 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:17.910 { 00:17:17.910 "nbd_device": "/dev/nbd0", 00:17:17.910 "bdev_name": "nvme0n1" 00:17:17.910 }, 00:17:17.910 { 00:17:17.910 "nbd_device": "/dev/nbd1", 00:17:17.910 "bdev_name": "nvme1n1" 00:17:17.910 }, 00:17:17.910 { 00:17:17.910 "nbd_device": "/dev/nbd2", 00:17:17.910 "bdev_name": "nvme2n1" 00:17:17.910 }, 00:17:17.910 { 00:17:17.910 "nbd_device": "/dev/nbd3", 00:17:17.910 "bdev_name": "nvme2n2" 00:17:17.910 }, 00:17:17.910 { 00:17:17.910 "nbd_device": "/dev/nbd4", 00:17:17.910 "bdev_name": "nvme2n3" 00:17:17.910 }, 00:17:17.910 { 00:17:17.910 "nbd_device": "/dev/nbd5", 00:17:17.910 "bdev_name": "nvme3n1" 00:17:17.910 } 00:17:17.910 ]' 00:17:17.910 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:17.910 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:17.910 { 00:17:17.910 "nbd_device": "/dev/nbd0", 00:17:17.910 "bdev_name": "nvme0n1" 00:17:17.910 }, 00:17:17.910 { 00:17:17.910 "nbd_device": "/dev/nbd1", 00:17:17.910 "bdev_name": "nvme1n1" 00:17:17.910 }, 00:17:17.910 { 00:17:17.910 "nbd_device": "/dev/nbd2", 00:17:17.910 "bdev_name": "nvme2n1" 00:17:17.910 }, 00:17:17.910 { 00:17:17.910 "nbd_device": "/dev/nbd3", 00:17:17.910 "bdev_name": "nvme2n2" 00:17:17.910 }, 00:17:17.910 { 00:17:17.910 "nbd_device": "/dev/nbd4", 00:17:17.910 "bdev_name": "nvme2n3" 00:17:17.910 }, 00:17:17.910 { 00:17:17.910 "nbd_device": "/dev/nbd5", 00:17:17.910 "bdev_name": "nvme3n1" 00:17:17.910 } 00:17:17.910 ]' 00:17:17.910 18:24:29 
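With all six devices mapped, nbd_get_disks returns a JSON array pairing each /dev/nbdX with its bdev, and the test keeps only the device paths via jq before stopping them one by one. Condensed from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # e.g. [ { "nbd_device": "/dev/nbd0", "bdev_name": "nvme0n1" }, ... ]
    nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
    nbd_disks_name=($(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device'))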
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:18.168 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:18.168 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:18.168 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:18.168 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:18.168 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:18.168 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.168 18:24:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.427 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:18.687 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:18.687 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:18.687 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:18.687 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.687 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.687 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:17:18.687 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:18.687 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.687 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.687 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:18.945 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:18.945 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:18.945 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:18.945 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.945 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.945 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:18.945 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:18.945 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.945 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.945 18:24:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:19.203 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:19.203 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:19.203 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:19.203 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:19.203 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:19.203 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:19.203 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:19.203 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:19.203 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:19.203 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:19.461 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:19.461 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:19.461 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:19.462 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:19.462 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:19.462 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:19.462 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:19.462 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:19.462 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:19.462 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:19.462 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:19.720 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:17:19.979 /dev/nbd0 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.979 1+0 records in 00:17:19.979 1+0 records out 00:17:19.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539031 s, 7.6 MB/s 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:19.979 18:24:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:17:20.238 /dev/nbd1 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:20.238 1+0 records in 00:17:20.238 1+0 records out 00:17:20.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419776 s, 9.8 MB/s 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:20.238 18:24:32 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:20.238 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:17:20.805 /dev/nbd10 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:20.805 1+0 records in 00:17:20.805 1+0 records out 00:17:20.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509159 s, 8.0 MB/s 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:20.805 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:17:20.805 /dev/nbd11 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:21.064 18:24:32 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:21.064 1+0 records in 00:17:21.064 1+0 records out 00:17:21.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488582 s, 8.4 MB/s 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:21.064 18:24:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:17:21.322 /dev/nbd12 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:21.322 1+0 records in 00:17:21.322 1+0 records out 00:17:21.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636768 s, 6.4 MB/s 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:21.322 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:17:21.581 /dev/nbd13 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:21.581 1+0 records in 00:17:21.581 1+0 records out 00:17:21.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112889 s, 3.6 MB/s 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.581 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:21.839 { 00:17:21.839 "nbd_device": "/dev/nbd0", 00:17:21.839 "bdev_name": "nvme0n1" 00:17:21.839 }, 00:17:21.839 { 00:17:21.839 "nbd_device": "/dev/nbd1", 00:17:21.839 "bdev_name": "nvme1n1" 00:17:21.839 }, 00:17:21.839 { 00:17:21.839 "nbd_device": "/dev/nbd10", 00:17:21.839 "bdev_name": "nvme2n1" 00:17:21.839 }, 00:17:21.839 { 00:17:21.839 "nbd_device": "/dev/nbd11", 00:17:21.839 "bdev_name": "nvme2n2" 00:17:21.839 }, 00:17:21.839 { 00:17:21.839 "nbd_device": "/dev/nbd12", 00:17:21.839 "bdev_name": "nvme2n3" 00:17:21.839 }, 00:17:21.839 { 00:17:21.839 "nbd_device": "/dev/nbd13", 00:17:21.839 "bdev_name": "nvme3n1" 00:17:21.839 } 00:17:21.839 ]' 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:17:21.839 { 00:17:21.839 "nbd_device": "/dev/nbd0", 00:17:21.839 "bdev_name": "nvme0n1" 00:17:21.839 }, 00:17:21.839 { 00:17:21.839 "nbd_device": "/dev/nbd1", 00:17:21.839 "bdev_name": "nvme1n1" 00:17:21.839 }, 00:17:21.839 { 00:17:21.839 "nbd_device": "/dev/nbd10", 00:17:21.839 "bdev_name": "nvme2n1" 00:17:21.839 }, 00:17:21.839 { 00:17:21.839 "nbd_device": "/dev/nbd11", 00:17:21.839 "bdev_name": "nvme2n2" 00:17:21.839 }, 00:17:21.839 { 00:17:21.839 "nbd_device": "/dev/nbd12", 00:17:21.839 "bdev_name": "nvme2n3" 00:17:21.839 }, 00:17:21.839 { 00:17:21.839 "nbd_device": "/dev/nbd13", 00:17:21.839 "bdev_name": "nvme3n1" 00:17:21.839 } 00:17:21.839 ]' 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:21.839 /dev/nbd1 00:17:21.839 /dev/nbd10 00:17:21.839 /dev/nbd11 00:17:21.839 /dev/nbd12 00:17:21.839 /dev/nbd13' 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:21.839 /dev/nbd1 00:17:21.839 /dev/nbd10 00:17:21.839 /dev/nbd11 00:17:21.839 /dev/nbd12 00:17:21.839 /dev/nbd13' 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:21.839 256+0 records in 00:17:21.839 256+0 records out 00:17:21.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00573682 s, 183 MB/s 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:21.839 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:22.099 256+0 records in 00:17:22.099 256+0 records out 00:17:22.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133275 s, 7.9 MB/s 00:17:22.099 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:22.099 18:24:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:22.099 256+0 records in 00:17:22.099 256+0 records out 00:17:22.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171027 s, 
6.1 MB/s 00:17:22.099 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:22.099 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:22.357 256+0 records in 00:17:22.357 256+0 records out 00:17:22.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159301 s, 6.6 MB/s 00:17:22.357 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:22.358 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:22.616 256+0 records in 00:17:22.616 256+0 records out 00:17:22.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159284 s, 6.6 MB/s 00:17:22.616 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:22.616 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:22.616 256+0 records in 00:17:22.616 256+0 records out 00:17:22.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126086 s, 8.3 MB/s 00:17:22.616 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:22.616 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:22.875 256+0 records in 00:17:22.875 256+0 records out 00:17:22.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143779 s, 7.3 MB/s 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:17:22.875 
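The data-path check running here is a plain round trip: fill a 1 MiB scratch file from /dev/urandom, dd it through every NBD device with O_DIRECT, then cmp the first megabyte of each device back against the file. Condensed from the trace into two loops:

    src=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of="$src" bs=4096 count=256             # 1 MiB of random data
    for nbd in "${nbds[@]}"; do
        dd if="$src" of="$nbd" bs=4096 count=256 oflag=direct  # write it through the nbd device
    done
    for nbd in "${nbds[@]}"; do
        cmp -b -n 1M "$src" "$nbd"                             # read back and byte-compare
    done
    rm "$src"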
18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:22.875 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:23.134 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.134 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.134 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.134 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.134 18:24:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.134 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.134 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:23.134 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.134 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.134 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:23.392 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:23.392 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:23.392 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:23.392 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.392 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.392 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:23.392 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:23.392 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.392 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.392 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:17:23.653 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:23.653 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:23.653 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:23.653 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.653 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.653 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:23.653 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:23.653 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.653 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.653 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:23.915 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:23.915 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:23.915 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:23.915 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.915 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.915 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:23.915 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:23.915 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.915 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.915 18:24:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:24.173 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:24.173 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:24.173 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:24.173 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.173 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.173 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:24.173 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:24.173 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.173 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.173 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:24.432 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:24.432 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:24.432 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:24.432 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.432 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.432 
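Teardown mirrors setup: each nbd_stop_disk RPC is paired with waitfornbd_exit, which polls until the name drops out of /proc/partitions. A sketch reconstructed from the trace (up to 20 attempts, as shown):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # finished as soon as the kernel no longer lists the device
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }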
18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:24.432 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:24.432 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.432 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:24.432 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:24.432 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:17:24.690 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:24.948 malloc_lvol_verify 00:17:24.948 18:24:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:25.206 e882b660-0e69-4d41-97f0-eeb689ef9be1 00:17:25.206 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:25.464 6e585227-0545-4c65-8cbd-558e0b966a22 00:17:25.464 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:25.722 /dev/nbd0 00:17:25.722 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:17:25.722 mke2fs 1.46.5 (30-Dec-2021) 00:17:25.722 Discarding device blocks: 0/4096 done 00:17:25.722 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:25.722 
00:17:25.722 Allocating group tables: 0/1 done 00:17:25.722 Writing inode tables: 0/1 done 00:17:25.722 Creating journal (1024 blocks): done 00:17:25.722 Writing superblocks and filesystem accounting information: 0/1 done 00:17:25.722 00:17:25.722 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:17:25.722 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:25.722 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:25.722 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:25.722 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:25.722 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:25.722 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.722 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 76576 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 76576 ']' 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 76576 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76576 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:25.981 killing process with pid 76576 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76576' 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 76576 00:17:25.981 18:24:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 76576 00:17:27.409 18:24:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:27.409 00:17:27.409 real 0m12.212s 00:17:27.409 user 0m17.046s 00:17:27.409 sys 0m4.037s 00:17:27.409 18:24:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:27.409 18:24:39 blockdev_xnvme.bdev_nbd -- 
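The killprocess call above follows the standard autotest teardown: confirm the pid is still alive with kill -0, resolve its command name (reactor_0 here), signal it, and wait to reap the exit status. A condensed sketch reconstructed from the xtrace (the sudo escalation branch is reduced to a comment):

killprocess() {
    local pid=$1
    local process_name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                    # must still be running
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # The real helper signals through sudo when $process_name is "sudo";
    # for reactor_0 a plain SIGTERM is enough.
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                   # reap and propagate status
}
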
common/autotest_common.sh@10 -- # set +x 00:17:27.409 ************************************ 00:17:27.409 END TEST bdev_nbd 00:17:27.409 ************************************ 00:17:27.409 18:24:39 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:27.409 18:24:39 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:27.409 18:24:39 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:17:27.409 18:24:39 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:17:27.409 18:24:39 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:27.409 18:24:39 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:27.409 18:24:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.409 18:24:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:27.409 ************************************ 00:17:27.409 START TEST bdev_fio 00:17:27.409 ************************************ 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:27.409 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:17:27.409 18:24:39 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:17:27.409 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:27.410 ************************************ 00:17:27.410 START TEST bdev_fio_rw_verify 00:17:27.410 ************************************ 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
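The echo pairs above build bdev.fio incrementally: because fio reported version 3.35, the generator first appends serialize_overlap=1, then emits one [job_...] stanza per xNVMe bdev, each pinned to its device by filename=. A sketch of that append step (the global section fio_config_gen wrote earlier is not visible in the log, so it is omitted; the loop really runs over "${bdevs_name[@]}"):

echo serialize_overlap=1 >> "$config_file"   # emitted because fio is >= 3.x
# One stanza per bdev, as echoed by blockdev.sh@341/342:
for b in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
    printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$config_file"
done
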
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:27.410 18:24:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:27.668 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:27.668 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:27.668 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:27.668 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:27.668 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:27.668 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:27.668 fio-3.35 00:17:27.668 Starting 6 threads 00:17:39.898 00:17:39.898 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=76998: Mon Jul 22 18:24:50 2024 00:17:39.898 read: IOPS=29.9k, 
BW=117MiB/s (122MB/s)(1168MiB/10001msec) 00:17:39.898 slat (usec): min=3, max=1323, avg= 7.24, stdev= 5.36 00:17:39.898 clat (usec): min=97, max=6760, avg=615.16, stdev=233.84 00:17:39.898 lat (usec): min=101, max=6767, avg=622.40, stdev=234.57 00:17:39.898 clat percentiles (usec): 00:17:39.898 | 50.000th=[ 635], 99.000th=[ 1205], 99.900th=[ 1811], 99.990th=[ 4015], 00:17:39.898 | 99.999th=[ 6718] 00:17:39.898 write: IOPS=30.3k, BW=118MiB/s (124MB/s)(1184MiB/10001msec); 0 zone resets 00:17:39.898 slat (usec): min=13, max=3950, avg=27.38, stdev=30.92 00:17:39.898 clat (usec): min=70, max=8171, avg=694.76, stdev=237.30 00:17:39.898 lat (usec): min=93, max=8206, avg=722.14, stdev=240.34 00:17:39.898 clat percentiles (usec): 00:17:39.898 | 50.000th=[ 701], 99.000th=[ 1319], 99.900th=[ 1844], 99.990th=[ 2606], 00:17:39.898 | 99.999th=[ 4883] 00:17:39.898 bw ( KiB/s): min=98319, max=147062, per=99.83%, avg=120982.16, stdev=2775.00, samples=114 00:17:39.898 iops : min=24579, max=36765, avg=30245.37, stdev=693.73, samples=114 00:17:39.898 lat (usec) : 100=0.01%, 250=3.57%, 500=22.70%, 750=39.18%, 1000=28.83% 00:17:39.898 lat (msec) : 2=5.67%, 4=0.05%, 10=0.01% 00:17:39.898 cpu : usr=60.02%, sys=26.14%, ctx=7871, majf=0, minf=25358 00:17:39.898 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:39.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.898 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.898 issued rwts: total=299098,303012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.898 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:39.898 00:17:39.898 Run status group 0 (all jobs): 00:17:39.898 READ: bw=117MiB/s (122MB/s), 117MiB/s-117MiB/s (122MB/s-122MB/s), io=1168MiB (1225MB), run=10001-10001msec 00:17:39.898 WRITE: bw=118MiB/s (124MB/s), 118MiB/s-118MiB/s (124MB/s-124MB/s), io=1184MiB (1241MB), run=10001-10001msec 00:17:39.898 ----------------------------------------------------- 00:17:39.898 Suppressions used: 00:17:39.898 count bytes template 00:17:39.898 6 48 /usr/src/fio/parse.c 00:17:39.898 3696 354816 /usr/src/fio/iolog.c 00:17:39.898 1 8 libtcmalloc_minimal.so 00:17:39.898 1 904 libcrypto.so 00:17:39.898 ----------------------------------------------------- 00:17:39.898 00:17:39.898 00:17:39.898 real 0m12.437s 00:17:39.898 user 0m37.917s 00:17:39.898 sys 0m16.088s 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:39.898 ************************************ 00:17:39.898 END TEST bdev_fio_rw_verify 00:17:39.898 ************************************ 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- 
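One detail worth calling out from the run just completed: fio_bdev has to preload the ASan runtime ahead of the SPDK fio plugin, otherwise ASan aborts at startup because its runtime is not first in the initial library list. The detection logic, distilled from the xtrace above (the parallel libclang_rt.asan probe is elided):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# Ask the dynamic linker which ASan runtime the plugin is linked against.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
if [[ -n $asan_lib ]]; then
    # The ASan runtime must come first in LD_PRELOAD, before the plugin itself.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
fi
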
common/autotest_common.sh@1282 -- # local bdev_type= 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:39.898 18:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:39.899 18:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a5c46bdf-fcce-45bd-a3ad-6d40d353d828"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a5c46bdf-fcce-45bd-a3ad-6d40d353d828",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b5d84d06-b8f2-42cb-aec6-5468ba3cedeb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b5d84d06-b8f2-42cb-aec6-5468ba3cedeb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c1eee288-171f-4347-8c35-bd55d97f0f80"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c1eee288-171f-4347-8c35-bd55d97f0f80",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' 
' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "3dceee94-1d4d-47bf-8064-4f5ee4694854"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3dceee94-1d4d-47bf-8064-4f5ee4694854",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "855a0d07-8de0-496e-9fc6-daffd6abebb1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "855a0d07-8de0-496e-9fc6-daffd6abebb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "270757db-97c0-48ad-b74a-ea95c026f4f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "270757db-97c0-48ad-b74a-ea95c026f4f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:39.899 18:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:39.899 18:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:39.899 /home/vagrant/spdk_repo/spdk 00:17:39.899 18:24:51 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:39.899 18:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:39.899 18:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:39.899 00:17:39.899 real 0m12.616s 00:17:39.899 user 0m38.010s 00:17:39.899 sys 0m16.172s 00:17:39.899 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:39.899 18:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:39.899 ************************************ 00:17:39.899 END TEST bdev_fio 00:17:39.899 ************************************ 00:17:39.899 18:24:51 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:39.899 18:24:51 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:39.899 18:24:51 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:39.899 18:24:51 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:17:39.899 18:24:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.899 18:24:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:39.899 ************************************ 00:17:39.899 START TEST bdev_verify 00:17:39.899 ************************************ 00:17:39.899 18:24:51 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:40.157 [2024-07-22 18:24:51.970187] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:40.158 [2024-07-22 18:24:51.970359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77168 ] 00:17:40.158 [2024-07-22 18:24:52.137853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:40.415 [2024-07-22 18:24:52.404858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.415 [2024-07-22 18:24:52.404874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.982 Running I/O for 5 seconds... 
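bdev_verify swaps fio for the bdevperf example app against the same bdev.json. The invocation from the xtrace, with the flags spelled out (-q queue depth per job, -o IO size in bytes, -w workload, -t runtime in seconds, -m core mask; 0x3 runs two reactors, which is why every device in the table below reports both a 0x1 and a 0x2 job row):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
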
00:17:46.245 00:17:46.245 Latency(us) 00:17:46.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.245 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:46.245 Verification LBA range: start 0x0 length 0xa0000 00:17:46.245 nvme0n1 : 5.05 1699.67 6.64 0.00 0.00 75171.20 15073.28 57909.99 00:17:46.245 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:46.245 Verification LBA range: start 0xa0000 length 0xa0000 00:17:46.246 nvme0n1 : 5.06 1695.56 6.62 0.00 0.00 75352.21 10902.81 64821.06 00:17:46.246 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:46.246 Verification LBA range: start 0x0 length 0xbd0bd 00:17:46.246 nvme1n1 : 5.06 2929.44 11.44 0.00 0.00 43381.65 5421.61 61961.31 00:17:46.246 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:46.246 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:46.246 nvme1n1 : 5.06 2950.85 11.53 0.00 0.00 43136.30 3664.06 57433.37 00:17:46.246 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:46.246 Verification LBA range: start 0x0 length 0x80000 00:17:46.246 nvme2n1 : 5.06 1719.10 6.72 0.00 0.00 73972.10 14000.87 71017.19 00:17:46.246 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:46.246 Verification LBA range: start 0x80000 length 0x80000 00:17:46.246 nvme2n1 : 5.08 1739.78 6.80 0.00 0.00 73026.38 9830.40 62914.56 00:17:46.246 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:46.246 Verification LBA range: start 0x0 length 0x80000 00:17:46.246 nvme2n2 : 5.07 1717.68 6.71 0.00 0.00 73916.93 5928.03 66250.94 00:17:46.246 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:46.246 Verification LBA range: start 0x80000 length 0x80000 00:17:46.246 nvme2n2 : 5.07 1716.09 6.70 0.00 0.00 73877.85 9115.46 63867.81 00:17:46.246 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:46.246 Verification LBA range: start 0x0 length 0x80000 00:17:46.246 nvme2n3 : 5.05 1698.24 6.63 0.00 0.00 74643.72 16443.58 66727.56 00:17:46.246 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:46.246 Verification LBA range: start 0x80000 length 0x80000 00:17:46.246 nvme2n3 : 5.07 1717.51 6.71 0.00 0.00 73672.36 11141.12 58386.62 00:17:46.246 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:46.246 Verification LBA range: start 0x0 length 0x20000 00:17:46.246 nvme3n1 : 5.07 1716.54 6.71 0.00 0.00 73734.84 4825.83 71017.19 00:17:46.246 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:46.246 Verification LBA range: start 0x20000 length 0x20000 00:17:46.246 nvme3n1 : 5.07 1715.44 6.70 0.00 0.00 73660.22 9830.40 63391.19 00:17:46.246 =================================================================================================================== 00:17:46.246 Total : 23015.91 89.91 0.00 0.00 66222.23 3664.06 71017.19 00:17:47.181 00:17:47.181 real 0m7.306s 00:17:47.181 user 0m11.279s 00:17:47.181 sys 0m1.836s 00:17:47.181 18:24:59 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:47.181 18:24:59 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:47.181 ************************************ 00:17:47.181 END TEST bdev_verify 00:17:47.181 ************************************ 00:17:47.439 18:24:59 blockdev_xnvme -- 
common/autotest_common.sh@1142 -- # return 0 00:17:47.439 18:24:59 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:47.439 18:24:59 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:17:47.439 18:24:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:47.439 18:24:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:47.439 ************************************ 00:17:47.439 START TEST bdev_verify_big_io 00:17:47.439 ************************************ 00:17:47.439 18:24:59 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:47.439 [2024-07-22 18:24:59.360492] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:47.439 [2024-07-22 18:24:59.360652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77271 ] 00:17:47.698 [2024-07-22 18:24:59.527756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:47.956 [2024-07-22 18:24:59.763566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.956 [2024-07-22 18:24:59.763566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.523 Running I/O for 5 seconds... 00:17:55.108 00:17:55.108 Latency(us) 00:17:55.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.108 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:55.108 Verification LBA range: start 0x0 length 0xa000 00:17:55.108 nvme0n1 : 5.88 119.81 7.49 0.00 0.00 1037518.83 160146.15 1998013.91 00:17:55.108 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:55.108 Verification LBA range: start 0xa000 length 0xa000 00:17:55.108 nvme0n1 : 5.93 103.84 6.49 0.00 0.00 1187889.06 6434.44 1593835.52 00:17:55.108 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:55.108 Verification LBA range: start 0x0 length 0xbd0b 00:17:55.108 nvme1n1 : 5.90 151.92 9.50 0.00 0.00 789763.06 8043.05 793104.76 00:17:55.108 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:55.108 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:55.108 nvme1n1 : 5.92 162.21 10.14 0.00 0.00 733771.09 112006.98 732096.70 00:17:55.108 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:55.108 Verification LBA range: start 0x0 length 0x8000 00:17:55.108 nvme2n1 : 5.90 140.99 8.81 0.00 0.00 847480.30 17515.99 1372681.31 00:17:55.108 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:55.108 Verification LBA range: start 0x8000 length 0x8000 00:17:55.108 nvme2n1 : 5.93 161.95 10.12 0.00 0.00 736793.69 28240.06 747348.71 00:17:55.108 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:55.108 Verification LBA range: start 0x0 length 0x8000 00:17:55.108 nvme2n2 : 5.90 135.49 8.47 0.00 0.00 867223.78 11021.96 1670095.59 00:17:55.108 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 
65536) 00:17:55.108 Verification LBA range: start 0x8000 length 0x8000 00:17:55.108 nvme2n2 : 5.92 129.70 8.11 0.00 0.00 906248.38 16801.05 1113397.06 00:17:55.108 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:55.108 Verification LBA range: start 0x0 length 0x8000 00:17:55.108 nvme2n3 : 5.91 151.61 9.48 0.00 0.00 752338.25 16562.73 793104.76 00:17:55.108 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:55.108 Verification LBA range: start 0x8000 length 0x8000 00:17:55.108 nvme2n3 : 5.93 151.08 9.44 0.00 0.00 756924.58 20018.27 823608.79 00:17:55.108 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:55.108 Verification LBA range: start 0x0 length 0x2000 00:17:55.108 nvme3n1 : 5.91 83.87 5.24 0.00 0.00 1315561.69 17992.61 3080906.94 00:17:55.108 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:55.108 Verification LBA range: start 0x2000 length 0x2000 00:17:55.108 nvme3n1 : 5.92 108.02 6.75 0.00 0.00 1028019.94 9889.98 3126662.98 00:17:55.108 =================================================================================================================== 00:17:55.108 Total : 1600.49 100.03 0.00 0.00 881144.87 6434.44 3126662.98 00:17:56.044 00:17:56.044 real 0m8.541s 00:17:56.044 user 0m15.221s 00:17:56.044 sys 0m0.590s 00:17:56.044 18:25:07 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:56.044 18:25:07 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.044 ************************************ 00:17:56.044 END TEST bdev_verify_big_io 00:17:56.044 ************************************ 00:17:56.044 18:25:07 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:56.044 18:25:07 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:56.044 18:25:07 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:56.044 18:25:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:56.044 18:25:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:56.044 ************************************ 00:17:56.044 START TEST bdev_write_zeroes 00:17:56.044 ************************************ 00:17:56.044 18:25:07 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:56.044 [2024-07-22 18:25:07.916068] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:56.044 [2024-07-22 18:25:07.916231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77388 ] 00:17:56.302 [2024-07-22 18:25:08.083529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.561 [2024-07-22 18:25:08.321214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.820 Running I/O for 1 seconds... 
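The two bdevperf passes bracketing this point reuse the verify invocation with only the knobs changed: big-IO verification raises -o to 65536 (64 KiB per IO, hence the much lower IOPS in the table above), and the zero-fill pass now starting uses -w write_zeroes for one second on a single core. A sketch of the two variants (paths abbreviated):

# Big-IO verify: 64 KiB blocks, otherwise identical to the 4 KiB run.
bdevperf --json bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3
# Zero-fill: briefly exercises each bdev's write_zeroes path.
bdevperf --json bdev.json -q 128 -o 4096 -w write_zeroes -t 1
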
00:17:58.195 00:17:58.195 Latency(us) 00:17:58.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.195 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:58.195 nvme0n1 : 1.00 11736.10 45.84 0.00 0.00 10893.50 7387.69 24665.37 00:17:58.195 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:58.195 nvme1n1 : 1.01 14867.10 58.07 0.00 0.00 8580.70 3455.53 15132.86 00:17:58.195 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:58.195 nvme2n1 : 1.02 11724.29 45.80 0.00 0.00 10849.95 5898.24 22282.24 00:17:58.195 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:58.195 nvme2n2 : 1.02 11706.42 45.73 0.00 0.00 10852.18 5749.29 21209.83 00:17:58.195 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:58.195 nvme2n3 : 1.02 11688.34 45.66 0.00 0.00 10859.94 5719.51 21328.99 00:17:58.195 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:58.195 nvme3n1 : 1.02 11670.24 45.59 0.00 0.00 10866.50 5689.72 22043.93 00:17:58.195 =================================================================================================================== 00:17:58.195 Total : 73392.49 286.69 0.00 0.00 10402.13 3455.53 24665.37 00:17:59.132 00:17:59.132 real 0m3.212s 00:17:59.132 user 0m2.466s 00:17:59.132 sys 0m0.576s 00:17:59.132 18:25:11 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:59.132 18:25:11 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:59.132 ************************************ 00:17:59.132 END TEST bdev_write_zeroes 00:17:59.132 ************************************ 00:17:59.132 18:25:11 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:59.132 18:25:11 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.132 18:25:11 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:59.132 18:25:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:59.132 18:25:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:59.132 ************************************ 00:17:59.132 START TEST bdev_json_nonenclosed 00:17:59.132 ************************************ 00:17:59.132 18:25:11 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.397 [2024-07-22 18:25:11.199314] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:59.397 [2024-07-22 18:25:11.199519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77447 ] 00:17:59.397 [2024-07-22 18:25:11.376442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.657 [2024-07-22 18:25:11.611521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.657 [2024-07-22 18:25:11.611648] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:59.657 [2024-07-22 18:25:11.611692] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:59.657 [2024-07-22 18:25:11.611713] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:00.224 00:18:00.224 real 0m0.950s 00:18:00.224 user 0m0.686s 00:18:00.224 sys 0m0.157s 00:18:00.224 18:25:12 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:18:00.224 18:25:12 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:00.224 18:25:12 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:00.224 ************************************ 00:18:00.224 END TEST bdev_json_nonenclosed 00:18:00.224 ************************************ 00:18:00.224 18:25:12 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:18:00.224 18:25:12 blockdev_xnvme -- bdev/blockdev.sh@781 -- # true 00:18:00.224 18:25:12 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:00.224 18:25:12 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:18:00.224 18:25:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.225 18:25:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:00.225 ************************************ 00:18:00.225 START TEST bdev_json_nonarray 00:18:00.225 ************************************ 00:18:00.225 18:25:12 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:00.225 [2024-07-22 18:25:12.184815] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:00.225 [2024-07-22 18:25:12.185016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77478 ] 00:18:00.483 [2024-07-22 18:25:12.350334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.741 [2024-07-22 18:25:12.586525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.741 [2024-07-22 18:25:12.586660] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
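bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each hands bdevperf a config that json_config rejects, and the wrapper is expected to exit 234 (visible above as es=234). Minimal sketches of configs that would trip the two errors logged above; the actual fixture files are not reproduced in the log:

# nonenclosed.json (sketch): valid JSON fragments, but not enclosed in {}
"subsystems": []

# nonarray.json (sketch): enclosed, but "subsystems" is not an array
{ "subsystems": {} }
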
00:18:00.741 [2024-07-22 18:25:12.586708] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:00.741 [2024-07-22 18:25:12.586728] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:00.999 00:18:00.999 real 0m0.912s 00:18:00.999 user 0m0.675s 00:18:00.999 sys 0m0.131s 00:18:00.999 18:25:13 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:18:00.999 18:25:13 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:00.999 18:25:13 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:00.999 ************************************ 00:18:00.999 END TEST bdev_json_nonarray 00:18:00.999 ************************************ 00:18:01.257 18:25:13 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:18:01.257 18:25:13 blockdev_xnvme -- bdev/blockdev.sh@784 -- # true 00:18:01.257 18:25:13 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:18:01.257 18:25:13 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:18:01.257 18:25:13 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:18:01.257 18:25:13 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:01.257 18:25:13 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:18:01.257 18:25:13 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:01.257 18:25:13 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:01.257 18:25:13 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:18:01.257 18:25:13 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:18:01.257 18:25:13 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:18:01.257 18:25:13 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:18:01.257 18:25:13 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:01.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:02.448 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.706 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.706 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.965 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.965 00:18:02.965 real 1m2.525s 00:18:02.965 user 1m44.099s 00:18:02.965 sys 0m26.970s 00:18:02.965 18:25:14 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:02.965 18:25:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:02.965 ************************************ 00:18:02.965 END TEST blockdev_xnvme 00:18:02.965 ************************************ 00:18:02.965 18:25:14 -- common/autotest_common.sh@1142 -- # return 0 00:18:02.965 18:25:14 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:02.965 18:25:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:02.965 18:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.965 18:25:14 -- common/autotest_common.sh@10 -- # set +x 00:18:02.965 ************************************ 00:18:02.965 START TEST ublk 00:18:02.965 ************************************ 00:18:02.965 18:25:14 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:02.965 * Looking for test storage... 
00:18:02.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:02.965 18:25:14 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:02.965 18:25:14 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:02.965 18:25:14 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:02.965 18:25:14 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:02.965 18:25:14 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:02.965 18:25:14 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:02.965 18:25:14 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:02.965 18:25:14 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:02.965 18:25:14 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:02.965 18:25:14 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:02.965 18:25:14 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:02.965 18:25:14 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:02.965 18:25:14 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:02.965 18:25:14 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:02.966 18:25:14 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:02.966 18:25:14 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:02.966 18:25:14 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:02.966 18:25:14 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:02.966 18:25:14 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:02.966 18:25:14 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:02.966 18:25:14 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:02.966 18:25:14 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.966 18:25:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:02.966 ************************************ 00:18:02.966 START TEST test_save_ublk_config 00:18:02.966 ************************************ 00:18:02.966 18:25:14 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:18:02.966 18:25:14 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:02.966 18:25:14 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=77761 00:18:02.966 18:25:14 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:02.966 18:25:14 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 77761 00:18:02.966 18:25:14 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:02.966 18:25:14 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 77761 ']' 00:18:02.966 18:25:14 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.966 18:25:14 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.966 18:25:14 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.966 18:25:14 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.966 18:25:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:03.225 [2024-07-22 18:25:15.099025] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
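Before anything can be saved, test_save_config needs live ublk state: the kernel module is loaded (modprobe ublk_drv above), spdk_tgt comes up with ublk debug logging, and the RPCs that follow create the ublk target and expose a malloc bdev as /dev/ublkb0. A sketch of that bring-up; the RPC names and parameters match the log, but the CLI flag spellings for ublk_start_disk are assumptions, since the log records only the resulting RPC activity:

modprobe ublk_drv                                  # kernel driver for /dev/ublkb*
build/bin/spdk_tgt -L ublk &                       # -L enables ublk debug traces
tgtpid=$!
# after waitforlisten:
scripts/rpc.py ublk_create_target
scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # flag names are a best guess
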
00:18:03.225 [2024-07-22 18:25:15.099217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77761 ] 00:18:03.487 [2024-07-22 18:25:15.278174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.746 [2024-07-22 18:25:15.560583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.682 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.682 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:18:04.682 18:25:16 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:04.682 18:25:16 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:04.682 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.682 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:04.682 [2024-07-22 18:25:16.353717] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:04.682 [2024-07-22 18:25:16.354893] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:04.682 malloc0 00:18:04.682 [2024-07-22 18:25:16.442865] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:04.682 [2024-07-22 18:25:16.442979] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:04.682 [2024-07-22 18:25:16.442994] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:04.682 [2024-07-22 18:25:16.443006] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:04.682 [2024-07-22 18:25:16.451705] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:04.682 [2024-07-22 18:25:16.451737] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:04.682 [2024-07-22 18:25:16.456700] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:04.682 [2024-07-22 18:25:16.456846] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:04.682 [2024-07-22 18:25:16.473718] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:04.682 0 00:18:04.682 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.682 18:25:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:04.682 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.682 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:04.943 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.943 18:25:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:04.943 "subsystems": [ 00:18:04.943 { 00:18:04.943 "subsystem": "keyring", 00:18:04.943 "config": [] 00:18:04.943 }, 00:18:04.943 { 00:18:04.943 "subsystem": "iobuf", 00:18:04.943 "config": [ 00:18:04.943 { 00:18:04.943 "method": "iobuf_set_options", 00:18:04.943 "params": { 00:18:04.943 "small_pool_count": 8192, 00:18:04.943 "large_pool_count": 1024, 00:18:04.943 "small_bufsize": 8192, 00:18:04.943 "large_bufsize": 135168 00:18:04.943 } 00:18:04.943 } 00:18:04.943 ] 00:18:04.943 }, 00:18:04.943 { 
00:18:04.943 "subsystem": "sock", 00:18:04.943 "config": [ 00:18:04.943 { 00:18:04.943 "method": "sock_set_default_impl", 00:18:04.943 "params": { 00:18:04.943 "impl_name": "posix" 00:18:04.943 } 00:18:04.943 }, 00:18:04.943 { 00:18:04.943 "method": "sock_impl_set_options", 00:18:04.943 "params": { 00:18:04.943 "impl_name": "ssl", 00:18:04.943 "recv_buf_size": 4096, 00:18:04.943 "send_buf_size": 4096, 00:18:04.943 "enable_recv_pipe": true, 00:18:04.943 "enable_quickack": false, 00:18:04.943 "enable_placement_id": 0, 00:18:04.943 "enable_zerocopy_send_server": true, 00:18:04.943 "enable_zerocopy_send_client": false, 00:18:04.943 "zerocopy_threshold": 0, 00:18:04.943 "tls_version": 0, 00:18:04.943 "enable_ktls": false 00:18:04.943 } 00:18:04.943 }, 00:18:04.943 { 00:18:04.943 "method": "sock_impl_set_options", 00:18:04.943 "params": { 00:18:04.943 "impl_name": "posix", 00:18:04.943 "recv_buf_size": 2097152, 00:18:04.943 "send_buf_size": 2097152, 00:18:04.943 "enable_recv_pipe": true, 00:18:04.943 "enable_quickack": false, 00:18:04.943 "enable_placement_id": 0, 00:18:04.943 "enable_zerocopy_send_server": true, 00:18:04.943 "enable_zerocopy_send_client": false, 00:18:04.943 "zerocopy_threshold": 0, 00:18:04.943 "tls_version": 0, 00:18:04.943 "enable_ktls": false 00:18:04.943 } 00:18:04.943 } 00:18:04.943 ] 00:18:04.943 }, 00:18:04.943 { 00:18:04.943 "subsystem": "vmd", 00:18:04.943 "config": [] 00:18:04.943 }, 00:18:04.943 { 00:18:04.943 "subsystem": "accel", 00:18:04.943 "config": [ 00:18:04.943 { 00:18:04.943 "method": "accel_set_options", 00:18:04.943 "params": { 00:18:04.943 "small_cache_size": 128, 00:18:04.943 "large_cache_size": 16, 00:18:04.943 "task_count": 2048, 00:18:04.943 "sequence_count": 2048, 00:18:04.943 "buf_count": 2048 00:18:04.943 } 00:18:04.943 } 00:18:04.943 ] 00:18:04.943 }, 00:18:04.943 { 00:18:04.943 "subsystem": "bdev", 00:18:04.943 "config": [ 00:18:04.943 { 00:18:04.943 "method": "bdev_set_options", 00:18:04.943 "params": { 00:18:04.943 "bdev_io_pool_size": 65535, 00:18:04.943 "bdev_io_cache_size": 256, 00:18:04.943 "bdev_auto_examine": true, 00:18:04.943 "iobuf_small_cache_size": 128, 00:18:04.943 "iobuf_large_cache_size": 16 00:18:04.943 } 00:18:04.943 }, 00:18:04.943 { 00:18:04.943 "method": "bdev_raid_set_options", 00:18:04.943 "params": { 00:18:04.943 "process_window_size_kb": 1024, 00:18:04.943 "process_max_bandwidth_mb_sec": 0 00:18:04.943 } 00:18:04.943 }, 00:18:04.943 { 00:18:04.943 "method": "bdev_iscsi_set_options", 00:18:04.943 "params": { 00:18:04.943 "timeout_sec": 30 00:18:04.943 } 00:18:04.943 }, 00:18:04.943 { 00:18:04.943 "method": "bdev_nvme_set_options", 00:18:04.943 "params": { 00:18:04.943 "action_on_timeout": "none", 00:18:04.943 "timeout_us": 0, 00:18:04.943 "timeout_admin_us": 0, 00:18:04.943 "keep_alive_timeout_ms": 10000, 00:18:04.943 "arbitration_burst": 0, 00:18:04.943 "low_priority_weight": 0, 00:18:04.943 "medium_priority_weight": 0, 00:18:04.943 "high_priority_weight": 0, 00:18:04.943 "nvme_adminq_poll_period_us": 10000, 00:18:04.943 "nvme_ioq_poll_period_us": 0, 00:18:04.943 "io_queue_requests": 0, 00:18:04.943 "delay_cmd_submit": true, 00:18:04.943 "transport_retry_count": 4, 00:18:04.943 "bdev_retry_count": 3, 00:18:04.943 "transport_ack_timeout": 0, 00:18:04.943 "ctrlr_loss_timeout_sec": 0, 00:18:04.943 "reconnect_delay_sec": 0, 00:18:04.943 "fast_io_fail_timeout_sec": 0, 00:18:04.943 "disable_auto_failback": false, 00:18:04.943 "generate_uuids": false, 00:18:04.943 "transport_tos": 0, 00:18:04.943 "nvme_error_stat": false, 
00:18:04.943 "rdma_srq_size": 0, 00:18:04.943 "io_path_stat": false, 00:18:04.943 "allow_accel_sequence": false, 00:18:04.943 "rdma_max_cq_size": 0, 00:18:04.943 "rdma_cm_event_timeout_ms": 0, 00:18:04.943 "dhchap_digests": [ 00:18:04.943 "sha256", 00:18:04.943 "sha384", 00:18:04.943 "sha512" 00:18:04.943 ], 00:18:04.943 "dhchap_dhgroups": [ 00:18:04.943 "null", 00:18:04.943 "ffdhe2048", 00:18:04.943 "ffdhe3072", 00:18:04.944 "ffdhe4096", 00:18:04.944 "ffdhe6144", 00:18:04.944 "ffdhe8192" 00:18:04.944 ] 00:18:04.944 } 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "method": "bdev_nvme_set_hotplug", 00:18:04.944 "params": { 00:18:04.944 "period_us": 100000, 00:18:04.944 "enable": false 00:18:04.944 } 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "method": "bdev_malloc_create", 00:18:04.944 "params": { 00:18:04.944 "name": "malloc0", 00:18:04.944 "num_blocks": 8192, 00:18:04.944 "block_size": 4096, 00:18:04.944 "physical_block_size": 4096, 00:18:04.944 "uuid": "4a3446ba-ac44-4e9d-bbdf-1a957baf913e", 00:18:04.944 "optimal_io_boundary": 0, 00:18:04.944 "md_size": 0, 00:18:04.944 "dif_type": 0, 00:18:04.944 "dif_is_head_of_md": false, 00:18:04.944 "dif_pi_format": 0 00:18:04.944 } 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "method": "bdev_wait_for_examine" 00:18:04.944 } 00:18:04.944 ] 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "subsystem": "scsi", 00:18:04.944 "config": null 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "subsystem": "scheduler", 00:18:04.944 "config": [ 00:18:04.944 { 00:18:04.944 "method": "framework_set_scheduler", 00:18:04.944 "params": { 00:18:04.944 "name": "static" 00:18:04.944 } 00:18:04.944 } 00:18:04.944 ] 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "subsystem": "vhost_scsi", 00:18:04.944 "config": [] 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "subsystem": "vhost_blk", 00:18:04.944 "config": [] 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "subsystem": "ublk", 00:18:04.944 "config": [ 00:18:04.944 { 00:18:04.944 "method": "ublk_create_target", 00:18:04.944 "params": { 00:18:04.944 "cpumask": "1" 00:18:04.944 } 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "method": "ublk_start_disk", 00:18:04.944 "params": { 00:18:04.944 "bdev_name": "malloc0", 00:18:04.944 "ublk_id": 0, 00:18:04.944 "num_queues": 1, 00:18:04.944 "queue_depth": 128 00:18:04.944 } 00:18:04.944 } 00:18:04.944 ] 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "subsystem": "nbd", 00:18:04.944 "config": [] 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "subsystem": "nvmf", 00:18:04.944 "config": [ 00:18:04.944 { 00:18:04.944 "method": "nvmf_set_config", 00:18:04.944 "params": { 00:18:04.944 "discovery_filter": "match_any", 00:18:04.944 "admin_cmd_passthru": { 00:18:04.944 "identify_ctrlr": false 00:18:04.944 } 00:18:04.944 } 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "method": "nvmf_set_max_subsystems", 00:18:04.944 "params": { 00:18:04.944 "max_subsystems": 1024 00:18:04.944 } 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "method": "nvmf_set_crdt", 00:18:04.944 "params": { 00:18:04.944 "crdt1": 0, 00:18:04.944 "crdt2": 0, 00:18:04.944 "crdt3": 0 00:18:04.944 } 00:18:04.944 } 00:18:04.944 ] 00:18:04.944 }, 00:18:04.944 { 00:18:04.944 "subsystem": "iscsi", 00:18:04.944 "config": [ 00:18:04.944 { 00:18:04.944 "method": "iscsi_set_options", 00:18:04.944 "params": { 00:18:04.944 "node_base": "iqn.2016-06.io.spdk", 00:18:04.944 "max_sessions": 128, 00:18:04.944 "max_connections_per_session": 2, 00:18:04.944 "max_queue_depth": 64, 00:18:04.944 "default_time2wait": 2, 00:18:04.944 "default_time2retain": 20, 00:18:04.944 
"first_burst_length": 8192, 00:18:04.944 "immediate_data": true, 00:18:04.944 "allow_duplicated_isid": false, 00:18:04.944 "error_recovery_level": 0, 00:18:04.944 "nop_timeout": 60, 00:18:04.944 "nop_in_interval": 30, 00:18:04.944 "disable_chap": false, 00:18:04.944 "require_chap": false, 00:18:04.944 "mutual_chap": false, 00:18:04.944 "chap_group": 0, 00:18:04.944 "max_large_datain_per_connection": 64, 00:18:04.944 "max_r2t_per_connection": 4, 00:18:04.944 "pdu_pool_size": 36864, 00:18:04.944 "immediate_data_pool_size": 16384, 00:18:04.944 "data_out_pool_size": 2048 00:18:04.944 } 00:18:04.944 } 00:18:04.944 ] 00:18:04.944 } 00:18:04.944 ] 00:18:04.944 }' 00:18:04.944 18:25:16 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 77761 00:18:04.944 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 77761 ']' 00:18:04.944 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 77761 00:18:04.944 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:18:04.944 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.944 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77761 00:18:04.944 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:04.944 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:04.944 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77761' 00:18:04.944 killing process with pid 77761 00:18:04.944 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 77761 00:18:04.944 18:25:16 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 77761 00:18:06.346 [2024-07-22 18:25:18.123577] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:06.346 [2024-07-22 18:25:18.163752] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:06.346 [2024-07-22 18:25:18.163951] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:06.346 [2024-07-22 18:25:18.171716] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:06.346 [2024-07-22 18:25:18.171776] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:06.346 [2024-07-22 18:25:18.171788] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:06.346 [2024-07-22 18:25:18.171821] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:18:06.346 [2024-07-22 18:25:18.172017] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:18:07.767 18:25:19 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=77825 00:18:07.767 18:25:19 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 77825 00:18:07.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.767 18:25:19 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 77825 ']' 00:18:07.767 18:25:19 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.767 18:25:19 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.767 18:25:19 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:07.767 18:25:19 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.767 18:25:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:07.767 18:25:19 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:07.767 18:25:19 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:07.767 "subsystems": [ 00:18:07.767 { 00:18:07.767 "subsystem": "keyring", 00:18:07.767 "config": [] 00:18:07.767 }, 00:18:07.767 { 00:18:07.767 "subsystem": "iobuf", 00:18:07.767 "config": [ 00:18:07.767 { 00:18:07.767 "method": "iobuf_set_options", 00:18:07.767 "params": { 00:18:07.767 "small_pool_count": 8192, 00:18:07.767 "large_pool_count": 1024, 00:18:07.767 "small_bufsize": 8192, 00:18:07.767 "large_bufsize": 135168 00:18:07.767 } 00:18:07.767 } 00:18:07.767 ] 00:18:07.767 }, 00:18:07.767 { 00:18:07.767 "subsystem": "sock", 00:18:07.767 "config": [ 00:18:07.767 { 00:18:07.767 "method": "sock_set_default_impl", 00:18:07.767 "params": { 00:18:07.767 "impl_name": "posix" 00:18:07.767 } 00:18:07.767 }, 00:18:07.767 { 00:18:07.767 "method": "sock_impl_set_options", 00:18:07.767 "params": { 00:18:07.767 "impl_name": "ssl", 00:18:07.767 "recv_buf_size": 4096, 00:18:07.767 "send_buf_size": 4096, 00:18:07.767 "enable_recv_pipe": true, 00:18:07.767 "enable_quickack": false, 00:18:07.767 "enable_placement_id": 0, 00:18:07.767 "enable_zerocopy_send_server": true, 00:18:07.767 "enable_zerocopy_send_client": false, 00:18:07.767 "zerocopy_threshold": 0, 00:18:07.767 "tls_version": 0, 00:18:07.767 "enable_ktls": false 00:18:07.767 } 00:18:07.767 }, 00:18:07.767 { 00:18:07.767 "method": "sock_impl_set_options", 00:18:07.767 "params": { 00:18:07.767 "impl_name": "posix", 00:18:07.767 "recv_buf_size": 2097152, 00:18:07.767 "send_buf_size": 2097152, 00:18:07.767 "enable_recv_pipe": true, 00:18:07.767 "enable_quickack": false, 00:18:07.767 "enable_placement_id": 0, 00:18:07.767 "enable_zerocopy_send_server": true, 00:18:07.767 "enable_zerocopy_send_client": false, 00:18:07.767 "zerocopy_threshold": 0, 00:18:07.767 "tls_version": 0, 00:18:07.767 "enable_ktls": false 00:18:07.767 } 00:18:07.767 } 00:18:07.767 ] 00:18:07.767 }, 00:18:07.767 { 00:18:07.767 "subsystem": "vmd", 00:18:07.767 "config": [] 00:18:07.767 }, 00:18:07.767 { 00:18:07.767 "subsystem": "accel", 00:18:07.767 "config": [ 00:18:07.767 { 00:18:07.767 "method": "accel_set_options", 00:18:07.767 "params": { 00:18:07.767 "small_cache_size": 128, 00:18:07.767 "large_cache_size": 16, 00:18:07.767 "task_count": 2048, 00:18:07.767 "sequence_count": 2048, 00:18:07.767 "buf_count": 2048 00:18:07.767 } 00:18:07.767 } 00:18:07.767 ] 00:18:07.767 }, 00:18:07.767 { 00:18:07.767 "subsystem": "bdev", 00:18:07.767 "config": [ 00:18:07.767 { 00:18:07.767 "method": "bdev_set_options", 00:18:07.767 "params": { 00:18:07.767 "bdev_io_pool_size": 65535, 00:18:07.767 "bdev_io_cache_size": 256, 00:18:07.767 "bdev_auto_examine": true, 00:18:07.767 "iobuf_small_cache_size": 128, 00:18:07.767 "iobuf_large_cache_size": 16 00:18:07.767 } 00:18:07.767 }, 00:18:07.767 { 00:18:07.767 "method": "bdev_raid_set_options", 00:18:07.767 "params": { 00:18:07.767 "process_window_size_kb": 1024, 00:18:07.767 "process_max_bandwidth_mb_sec": 0 00:18:07.767 } 00:18:07.767 }, 00:18:07.767 { 00:18:07.767 "method": "bdev_iscsi_set_options", 00:18:07.767 "params": { 00:18:07.767 "timeout_sec": 30 00:18:07.767 } 00:18:07.767 }, 00:18:07.767 { 00:18:07.767 "method": 
"bdev_nvme_set_options", 00:18:07.767 "params": { 00:18:07.767 "action_on_timeout": "none", 00:18:07.767 "timeout_us": 0, 00:18:07.767 "timeout_admin_us": 0, 00:18:07.767 "keep_alive_timeout_ms": 10000, 00:18:07.767 "arbitration_burst": 0, 00:18:07.767 "low_priority_weight": 0, 00:18:07.767 "medium_priority_weight": 0, 00:18:07.767 "high_priority_weight": 0, 00:18:07.767 "nvme_adminq_poll_period_us": 10000, 00:18:07.767 "nvme_ioq_poll_period_us": 0, 00:18:07.767 "io_queue_requests": 0, 00:18:07.767 "delay_cmd_submit": true, 00:18:07.767 "transport_retry_count": 4, 00:18:07.767 "bdev_retry_count": 3, 00:18:07.767 "transport_ack_timeout": 0, 00:18:07.767 "ctrlr_loss_timeout_sec": 0, 00:18:07.767 "reconnect_delay_sec": 0, 00:18:07.767 "fast_io_fail_timeout_sec": 0, 00:18:07.767 "disable_auto_failback": false, 00:18:07.767 "generate_uuids": false, 00:18:07.767 "transport_tos": 0, 00:18:07.767 "nvme_error_stat": false, 00:18:07.767 "rdma_srq_size": 0, 00:18:07.767 "io_path_stat": false, 00:18:07.767 "allow_accel_sequence": false, 00:18:07.767 "rdma_max_cq_size": 0, 00:18:07.767 "rdma_cm_event_timeout_ms": 0, 00:18:07.767 "dhchap_digests": [ 00:18:07.767 "sha256", 00:18:07.767 "sha384", 00:18:07.767 "sha512" 00:18:07.767 ], 00:18:07.767 "dhchap_dhgroups": [ 00:18:07.767 "null", 00:18:07.767 "ffdhe2048", 00:18:07.767 "ffdhe3072", 00:18:07.767 "ffdhe4096", 00:18:07.767 "ffdhe6144", 00:18:07.767 "ffdhe8192" 00:18:07.767 ] 00:18:07.767 } 00:18:07.767 }, 00:18:07.767 { 00:18:07.767 "method": "bdev_nvme_set_hotplug", 00:18:07.767 "params": { 00:18:07.767 "period_us": 100000, 00:18:07.767 "enable": false 00:18:07.767 } 00:18:07.767 }, 00:18:07.767 { 00:18:07.767 "method": "bdev_malloc_create", 00:18:07.767 "params": { 00:18:07.767 "name": "malloc0", 00:18:07.767 "num_blocks": 8192, 00:18:07.767 "block_size": 4096, 00:18:07.767 "physical_block_size": 4096, 00:18:07.767 "uuid": "4a3446ba-ac44-4e9d-bbdf-1a957baf913e", 00:18:07.767 "optimal_io_boundary": 0, 00:18:07.768 "md_size": 0, 00:18:07.768 "dif_type": 0, 00:18:07.768 "dif_is_head_of_md": false, 00:18:07.768 "dif_pi_format": 0 00:18:07.768 } 00:18:07.768 }, 00:18:07.768 { 00:18:07.768 "method": "bdev_wait_for_examine" 00:18:07.768 } 00:18:07.768 ] 00:18:07.768 }, 00:18:07.768 { 00:18:07.768 "subsystem": "scsi", 00:18:07.768 "config": null 00:18:07.768 }, 00:18:07.768 { 00:18:07.768 "subsystem": "scheduler", 00:18:07.768 "config": [ 00:18:07.768 { 00:18:07.768 "method": "framework_set_scheduler", 00:18:07.768 "params": { 00:18:07.768 "name": "static" 00:18:07.768 } 00:18:07.768 } 00:18:07.768 ] 00:18:07.768 }, 00:18:07.768 { 00:18:07.768 "subsystem": "vhost_scsi", 00:18:07.768 "config": [] 00:18:07.768 }, 00:18:07.768 { 00:18:07.768 "subsystem": "vhost_blk", 00:18:07.768 "config": [] 00:18:07.768 }, 00:18:07.768 { 00:18:07.768 "subsystem": "ublk", 00:18:07.768 "config": [ 00:18:07.768 { 00:18:07.768 "method": "ublk_create_target", 00:18:07.768 "params": { 00:18:07.768 "cpumask": "1" 00:18:07.768 } 00:18:07.768 }, 00:18:07.768 { 00:18:07.768 "method": "ublk_start_disk", 00:18:07.768 "params": { 00:18:07.768 "bdev_name": "malloc0", 00:18:07.768 "ublk_id": 0, 00:18:07.768 "num_queues": 1, 00:18:07.768 "queue_depth": 128 00:18:07.768 } 00:18:07.768 } 00:18:07.768 ] 00:18:07.768 }, 00:18:07.768 { 00:18:07.768 "subsystem": "nbd", 00:18:07.768 "config": [] 00:18:07.768 }, 00:18:07.768 { 00:18:07.768 "subsystem": "nvmf", 00:18:07.768 "config": [ 00:18:07.768 { 00:18:07.768 "method": "nvmf_set_config", 00:18:07.768 "params": { 00:18:07.768 "discovery_filter": 
"match_any", 00:18:07.768 "admin_cmd_passthru": { 00:18:07.768 "identify_ctrlr": false 00:18:07.768 } 00:18:07.768 } 00:18:07.768 }, 00:18:07.768 { 00:18:07.768 "method": "nvmf_set_max_subsystems", 00:18:07.768 "params": { 00:18:07.768 "max_subsystems": 1024 00:18:07.768 } 00:18:07.768 }, 00:18:07.768 { 00:18:07.768 "method": "nvmf_set_crdt", 00:18:07.768 "params": { 00:18:07.768 "crdt1": 0, 00:18:07.768 "crdt2": 0, 00:18:07.768 "crdt3": 0 00:18:07.768 } 00:18:07.768 } 00:18:07.768 ] 00:18:07.768 }, 00:18:07.768 { 00:18:07.768 "subsystem": "iscsi", 00:18:07.768 "config": [ 00:18:07.768 { 00:18:07.768 "method": "iscsi_set_options", 00:18:07.768 "params": { 00:18:07.768 "node_base": "iqn.2016-06.io.spdk", 00:18:07.768 "max_sessions": 128, 00:18:07.768 "max_connections_per_session": 2, 00:18:07.768 "max_queue_depth": 64, 00:18:07.768 "default_time2wait": 2, 00:18:07.768 "default_time2retain": 20, 00:18:07.768 "first_burst_length": 8192, 00:18:07.768 "immediate_data": true, 00:18:07.768 "allow_duplicated_isid": false, 00:18:07.768 "error_recovery_level": 0, 00:18:07.768 "nop_timeout": 60, 00:18:07.768 "nop_in_interval": 30, 00:18:07.768 "disable_chap": false, 00:18:07.768 "require_chap": false, 00:18:07.768 "mutual_chap": false, 00:18:07.768 "chap_group": 0, 00:18:07.768 "max_large_datain_per_connection": 64, 00:18:07.768 "max_r2t_per_connection": 4, 00:18:07.768 "pdu_pool_size": 36864, 00:18:07.768 "immediate_data_pool_size": 16384, 00:18:07.768 "data_out_pool_size": 2048 00:18:07.768 } 00:18:07.768 } 00:18:07.768 ] 00:18:07.768 } 00:18:07.768 ] 00:18:07.768 }' 00:18:07.768 [2024-07-22 18:25:19.564527] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:07.768 [2024-07-22 18:25:19.564735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77825 ] 00:18:07.768 [2024-07-22 18:25:19.740221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.027 [2024-07-22 18:25:19.979565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.963 [2024-07-22 18:25:20.911751] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:08.963 [2024-07-22 18:25:20.913001] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:08.963 [2024-07-22 18:25:20.918876] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:08.963 [2024-07-22 18:25:20.918983] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:08.963 [2024-07-22 18:25:20.918998] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:08.963 [2024-07-22 18:25:20.919007] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:08.963 [2024-07-22 18:25:20.925755] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:08.963 [2024-07-22 18:25:20.925781] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:08.963 [2024-07-22 18:25:20.936751] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:08.963 [2024-07-22 18:25:20.936933] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:08.963 [2024-07-22 18:25:20.953764] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV completed 00:18:09.226 18:25:20 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.227 18:25:20 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:18:09.227 18:25:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:09.227 18:25:20 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.227 18:25:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:09.227 18:25:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 77825 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 77825 ']' 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 77825 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77825 00:18:09.227 killing process with pid 77825 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77825' 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 77825 00:18:09.227 18:25:21 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 77825 00:18:10.604 [2024-07-22 18:25:22.461914] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:10.604 [2024-07-22 18:25:22.500716] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:10.604 [2024-07-22 18:25:22.522806] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:10.604 [2024-07-22 18:25:22.531722] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:10.604 [2024-07-22 18:25:22.531810] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:10.604 [2024-07-22 18:25:22.531824] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:10.604 [2024-07-22 18:25:22.531858] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:18:10.604 [2024-07-22 18:25:22.532049] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:18:11.981 18:25:23 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:11.981 00:18:11.981 real 0m8.839s 00:18:11.981 user 0m7.460s 00:18:11.981 sys 0m2.172s 00:18:11.981 18:25:23 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.981 18:25:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:11.981 ************************************ 00:18:11.981 END TEST test_save_ublk_config 00:18:11.981 ************************************ 00:18:11.981 18:25:23 
ublk -- common/autotest_common.sh@1142 -- # return 0 00:18:11.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.981 18:25:23 ublk -- ublk/ublk.sh@139 -- # spdk_pid=77898 00:18:11.981 18:25:23 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.981 18:25:23 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:11.981 18:25:23 ublk -- ublk/ublk.sh@141 -- # waitforlisten 77898 00:18:11.981 18:25:23 ublk -- common/autotest_common.sh@829 -- # '[' -z 77898 ']' 00:18:11.981 18:25:23 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.981 18:25:23 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.981 18:25:23 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.981 18:25:23 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.981 18:25:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.981 [2024-07-22 18:25:23.957122] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:11.981 [2024-07-22 18:25:23.957318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77898 ] 00:18:12.240 [2024-07-22 18:25:24.150331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:12.499 [2024-07-22 18:25:24.387948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.499 [2024-07-22 18:25:24.387960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.438 18:25:25 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.438 18:25:25 ublk -- common/autotest_common.sh@862 -- # return 0 00:18:13.438 18:25:25 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:13.438 18:25:25 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:13.438 18:25:25 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.438 18:25:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.438 ************************************ 00:18:13.438 START TEST test_create_ublk 00:18:13.438 ************************************ 00:18:13.438 18:25:25 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:18:13.438 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:13.438 18:25:25 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.438 18:25:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.438 [2024-07-22 18:25:25.182792] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:13.438 [2024-07-22 18:25:25.185671] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:13.438 18:25:25 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.438 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:13.438 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:13.438 18:25:25 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.438 18:25:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.438 18:25:25 
ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.707 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:13.707 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:13.707 18:25:25 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.707 18:25:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.707 [2024-07-22 18:25:25.454894] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:13.707 [2024-07-22 18:25:25.455462] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:13.707 [2024-07-22 18:25:25.455491] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:13.707 [2024-07-22 18:25:25.455505] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:13.707 [2024-07-22 18:25:25.463096] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:13.707 [2024-07-22 18:25:25.463130] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:13.707 [2024-07-22 18:25:25.470722] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:13.707 [2024-07-22 18:25:25.482945] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:13.708 [2024-07-22 18:25:25.498153] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:13.708 18:25:25 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.708 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:13.708 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:13.708 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:13.708 18:25:25 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.708 18:25:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.708 18:25:25 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.708 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:13.708 { 00:18:13.708 "ublk_device": "/dev/ublkb0", 00:18:13.708 "id": 0, 00:18:13.708 "queue_depth": 512, 00:18:13.708 "num_queues": 4, 00:18:13.708 "bdev_name": "Malloc0" 00:18:13.708 } 00:18:13.708 ]' 00:18:13.708 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:13.708 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:13.708 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:13.708 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:13.708 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:13.708 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:13.708 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:13.965 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:13.965 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:13.965 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:13.965 18:25:25 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based 
--runtime=10' 00:18:13.965 18:25:25 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:13.965 18:25:25 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:13.965 18:25:25 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:13.965 18:25:25 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:18:13.965 18:25:25 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:13.965 18:25:25 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:13.965 18:25:25 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:13.965 18:25:25 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:13.965 18:25:25 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:13.965 18:25:25 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:13.965 18:25:25 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:13.965 fio: verification read phase will never start because write phase uses all of runtime 00:18:13.965 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:13.965 fio-3.35 00:18:13.965 Starting 1 process 00:18:26.187 00:18:26.187 fio_test: (groupid=0, jobs=1): err= 0: pid=77948: Mon Jul 22 18:25:36 2024 00:18:26.187 write: IOPS=12.1k, BW=47.3MiB/s (49.6MB/s)(474MiB/10001msec); 0 zone resets 00:18:26.187 clat (usec): min=50, max=4000, avg=81.15, stdev=123.06 00:18:26.187 lat (usec): min=51, max=4000, avg=81.84, stdev=123.07 00:18:26.187 clat percentiles (usec): 00:18:26.187 | 1.00th=[ 58], 5.00th=[ 66], 10.00th=[ 69], 20.00th=[ 70], 00:18:26.187 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 75], 00:18:26.187 | 70.00th=[ 77], 80.00th=[ 80], 90.00th=[ 87], 95.00th=[ 94], 00:18:26.187 | 99.00th=[ 115], 99.50th=[ 127], 99.90th=[ 2606], 99.95th=[ 3163], 00:18:26.187 | 99.99th=[ 3589] 00:18:26.187 bw ( KiB/s): min=43336, max=54192, per=99.98%, avg=48474.11, stdev=2672.54, samples=19 00:18:26.187 iops : min=10834, max=13548, avg=12118.53, stdev=668.13, samples=19 00:18:26.187 lat (usec) : 100=96.85%, 250=2.84%, 500=0.01%, 750=0.02%, 1000=0.02% 00:18:26.187 lat (msec) : 2=0.11%, 4=0.16%, 10=0.01% 00:18:26.187 cpu : usr=2.91%, sys=8.25%, ctx=121236, majf=0, minf=795 00:18:26.187 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.187 issued rwts: total=0,121227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.187 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:26.187 00:18:26.187 Run status group 0 (all jobs): 00:18:26.187 WRITE: bw=47.3MiB/s (49.6MB/s), 47.3MiB/s-47.3MiB/s (49.6MB/s-49.6MB/s), io=474MiB (497MB), run=10001-10001msec 00:18:26.187 00:18:26.187 Disk stats (read/write): 00:18:26.187 ublkb0: ios=0/119947, merge=0/0, ticks=0/8883, in_queue=8884, util=99.10% 
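fio's warning above ("verification read phase will never start") is expected: with --time_based --runtime=10 the entire budget goes to writing, so the 0xcc pattern is laid down but never read back in a separate pass. One way to force inline verification instead, sketched here as an assumption (--verify_backlog is a standard fio option, but this variant is not what lvol/common.sh runs):

    # verify each batch of 1024 written blocks as it completes, rather than
    # deferring to a read pass that --time_based never reaches
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --do_verify=1 --verify=pattern \
        --verify_pattern=0xcc --verify_backlog=1024 --verify_state_save=0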
00:18:26.187 18:25:36 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.187 [2024-07-22 18:25:36.032553] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:26.187 [2024-07-22 18:25:36.077299] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:26.187 [2024-07-22 18:25:36.078837] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:26.187 [2024-07-22 18:25:36.084709] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:26.187 [2024-07-22 18:25:36.085058] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:26.187 [2024-07-22 18:25:36.085082] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.187 18:25:36 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.187 [2024-07-22 18:25:36.092817] ublk.c:1053:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:26.187 request: 00:18:26.187 { 00:18:26.187 "ublk_id": 0, 00:18:26.187 "method": "ublk_stop_disk", 00:18:26.187 "req_id": 1 00:18:26.187 } 00:18:26.187 Got JSON-RPC error response 00:18:26.187 response: 00:18:26.187 { 00:18:26.187 "code": -19, 00:18:26.187 "message": "No such device" 00:18:26.187 } 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:26.187 18:25:36 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.187 [2024-07-22 18:25:36.108804] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:18:26.187 [2024-07-22 18:25:36.116701] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:18:26.187 [2024-07-22 18:25:36.116750] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been 
destroyed 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.187 18:25:36 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.187 18:25:36 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:26.187 18:25:36 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.187 18:25:36 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:26.187 18:25:36 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:26.187 18:25:36 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:26.187 18:25:36 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.187 18:25:36 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:26.187 18:25:36 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:26.187 ************************************ 00:18:26.187 END TEST test_create_ublk 00:18:26.187 ************************************ 00:18:26.187 18:25:36 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:26.187 00:18:26.187 real 0m11.400s 00:18:26.187 user 0m0.753s 00:18:26.187 sys 0m0.925s 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.187 18:25:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.187 18:25:36 ublk -- common/autotest_common.sh@1142 -- # return 0 00:18:26.187 18:25:36 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:26.187 18:25:36 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:26.187 18:25:36 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.187 18:25:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.187 ************************************ 00:18:26.187 START TEST test_create_multi_ublk 00:18:26.187 ************************************ 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.187 [2024-07-22 18:25:36.634714] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:26.187 [2024-07-22 18:25:36.637435] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.187 18:25:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.187 [2024-07-22 18:25:36.906874] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:26.187 [2024-07-22 18:25:36.907376] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:26.188 [2024-07-22 18:25:36.907411] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:26.188 [2024-07-22 18:25:36.907421] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:26.188 [2024-07-22 18:25:36.915072] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:26.188 [2024-07-22 18:25:36.915098] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:26.188 [2024-07-22 18:25:36.922739] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:26.188 [2024-07-22 18:25:36.923505] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:26.188 [2024-07-22 18:25:36.933808] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:26.188 18:25:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.188 18:25:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:26.188 18:25:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:26.188 18:25:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:26.188 18:25:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.188 18:25:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.188 [2024-07-22 18:25:37.213861] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:26.188 [2024-07-22 18:25:37.214359] ublk.c:1931:ublk_start_disk: *INFO*: 
Enabling kernel access to bdev Malloc1 via ublk 1 00:18:26.188 [2024-07-22 18:25:37.214382] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:26.188 [2024-07-22 18:25:37.214395] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:26.188 [2024-07-22 18:25:37.222075] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:26.188 [2024-07-22 18:25:37.222106] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:26.188 [2024-07-22 18:25:37.229720] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:26.188 [2024-07-22 18:25:37.230470] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:26.188 [2024-07-22 18:25:37.238742] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.188 [2024-07-22 18:25:37.517869] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:26.188 [2024-07-22 18:25:37.518368] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:26.188 [2024-07-22 18:25:37.518397] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:26.188 [2024-07-22 18:25:37.518408] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:26.188 [2024-07-22 18:25:37.526106] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:26.188 [2024-07-22 18:25:37.526133] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:26.188 [2024-07-22 18:25:37.533730] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:26.188 [2024-07-22 18:25:37.534513] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:26.188 [2024-07-22 18:25:37.542754] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 
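Each device in test_create_multi_ublk goes through the same three-step bring-up visible above: create a malloc bdev, export it through ublk, and let the ADD_DEV/SET_PARAMS/START_DEV control commands complete. A condensed sketch of that loop using the RPC names exactly as they appear in this trace (the loop form itself is illustrative):

    scripts/rpc.py ublk_create_target                 # once per test
    for i in 0 1 2 3; do
        # 128 MiB bdev with a 4 KiB block size
        scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        # 4 queues, queue depth 512 -> exposes /dev/ublkb$i
        scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
    done
    scripts/rpc.py ublk_get_disks                     # yields the JSON array checked below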
00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.188 [2024-07-22 18:25:37.837860] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:26.188 [2024-07-22 18:25:37.838367] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:26.188 [2024-07-22 18:25:37.838390] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:26.188 [2024-07-22 18:25:37.838404] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:26.188 [2024-07-22 18:25:37.845738] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:26.188 [2024-07-22 18:25:37.845772] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:26.188 [2024-07-22 18:25:37.853722] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:26.188 [2024-07-22 18:25:37.854477] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:26.188 [2024-07-22 18:25:37.860319] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:26.188 { 00:18:26.188 "ublk_device": "/dev/ublkb0", 00:18:26.188 "id": 0, 00:18:26.188 "queue_depth": 512, 00:18:26.188 "num_queues": 4, 00:18:26.188 "bdev_name": "Malloc0" 00:18:26.188 }, 00:18:26.188 { 00:18:26.188 "ublk_device": "/dev/ublkb1", 00:18:26.188 "id": 1, 00:18:26.188 "queue_depth": 512, 00:18:26.188 "num_queues": 4, 00:18:26.188 "bdev_name": "Malloc1" 00:18:26.188 }, 00:18:26.188 { 00:18:26.188 "ublk_device": "/dev/ublkb2", 00:18:26.188 "id": 2, 00:18:26.188 "queue_depth": 512, 00:18:26.188 "num_queues": 4, 00:18:26.188 "bdev_name": "Malloc2" 00:18:26.188 }, 00:18:26.188 { 00:18:26.188 "ublk_device": "/dev/ublkb3", 00:18:26.188 "id": 3, 00:18:26.188 "queue_depth": 512, 00:18:26.188 "num_queues": 4, 00:18:26.188 "bdev_name": "Malloc3" 00:18:26.188 } 00:18:26.188 ]' 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r 
'.[0].ublk_device' 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:26.188 18:25:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:26.188 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:26.188 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:26.188 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:26.188 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:26.188 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:26.188 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:26.188 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:26.188 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:18:26.188 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:26.446 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:26.446 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:26.446 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:26.446 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:26.446 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:26.446 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:26.446 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:26.446 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:26.446 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:26.446 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:26.446 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:26.704 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:26.704 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:26.704 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:26.704 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:26.704 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:26.704 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:26.704 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:26.704 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:26.704 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:26.704 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:26.704 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:26.962 
18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.962 [2024-07-22 18:25:38.902977] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:26.962 [2024-07-22 18:25:38.939750] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:26.962 [2024-07-22 18:25:38.941099] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:26.962 [2024-07-22 18:25:38.950746] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:26.962 [2024-07-22 18:25:38.951135] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:26.962 [2024-07-22 18:25:38.951159] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.962 18:25:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.962 [2024-07-22 18:25:38.958807] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:27.221 [2024-07-22 18:25:38.988293] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:27.221 [2024-07-22 18:25:38.989825] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:27.221 [2024-07-22 18:25:38.998751] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:27.221 [2024-07-22 18:25:38.999089] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:27.221 [2024-07-22 18:25:38.999111] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:27.221 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.221 18:25:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:27.221 18:25:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:27.221 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.221 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
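Teardown runs in reverse, in the order the trace above and below shows: stop every ublk device first (each stop is a STOP_DEV followed by a DEL_DEV control command), destroy the target, then delete the backing bdevs. Sketched with the RPCs from this run (the loop form is illustrative):

    for i in 0 1 2 3; do
        scripts/rpc.py ublk_stop_disk "$i"            # STOP_DEV, then DEL_DEV
    done
    # generous timeout: draining queues on a busy device can take a while
    scripts/rpc.py -t 120 ublk_destroy_target
    for i in 0 1 2 3; do
        scripts/rpc.py bdev_malloc_delete "Malloc$i"
    done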
00:18:27.221 [2024-07-22 18:25:39.009892] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:27.221 [2024-07-22 18:25:39.043257] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:27.221 [2024-07-22 18:25:39.044776] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:27.221 [2024-07-22 18:25:39.049718] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:27.221 [2024-07-22 18:25:39.050060] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:27.221 [2024-07-22 18:25:39.050081] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:27.221 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.221 18:25:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:27.221 18:25:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:27.221 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.221 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.221 [2024-07-22 18:25:39.063891] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:27.221 [2024-07-22 18:25:39.101297] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:27.221 [2024-07-22 18:25:39.104050] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:27.221 [2024-07-22 18:25:39.110708] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:27.221 [2024-07-22 18:25:39.111055] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:27.221 [2024-07-22 18:25:39.111076] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:27.221 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.221 18:25:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:27.479 [2024-07-22 18:25:39.326843] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:18:27.479 [2024-07-22 18:25:39.332831] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:18:27.479 [2024-07-22 18:25:39.332885] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:27.479 18:25:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:27.479 18:25:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:27.479 18:25:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:27.479 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.479 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.738 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.738 18:25:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:27.738 18:25:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:27.738 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.738 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.996 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:18:27.996 18:25:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:27.996 18:25:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:27.996 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.996 18:25:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:28.587 18:25:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.587 18:25:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:28.587 18:25:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:28.587 18:25:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.587 18:25:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:28.845 ************************************ 00:18:28.845 END TEST test_create_multi_ublk 00:18:28.845 ************************************ 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:28.845 00:18:28.845 real 0m4.143s 00:18:28.845 user 0m1.229s 00:18:28.845 sys 0m0.171s 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:28.845 18:25:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:28.845 18:25:40 ublk -- common/autotest_common.sh@1142 -- # return 0 00:18:28.845 18:25:40 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:28.845 18:25:40 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:28.845 18:25:40 ublk -- ublk/ublk.sh@130 -- # killprocess 77898 00:18:28.845 18:25:40 ublk -- common/autotest_common.sh@948 -- # '[' -z 77898 ']' 00:18:28.845 18:25:40 ublk -- common/autotest_common.sh@952 -- # kill -0 77898 00:18:28.845 18:25:40 ublk -- common/autotest_common.sh@953 -- # uname 00:18:28.845 18:25:40 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:28.845 18:25:40 ublk -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77898 00:18:28.845 killing process with pid 77898 00:18:28.845 18:25:40 ublk -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:28.845 18:25:40 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:28.845 18:25:40 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77898' 00:18:28.845 18:25:40 ublk -- common/autotest_common.sh@967 -- # kill 77898 00:18:28.845 18:25:40 ublk -- common/autotest_common.sh@972 -- # wait 77898 00:18:30.222 [2024-07-22 18:25:41.850928] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:18:30.222 [2024-07-22 18:25:41.850994] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:18:31.156 00:18:31.156 real 0m28.170s 00:18:31.156 user 0m42.067s 00:18:31.156 sys 0m8.499s 00:18:31.156 18:25:43 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:31.156 18:25:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.156 ************************************ 00:18:31.156 END TEST ublk 00:18:31.156 ************************************ 00:18:31.156 18:25:43 -- common/autotest_common.sh@1142 -- # return 0 00:18:31.156 18:25:43 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:31.156 18:25:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:31.156 18:25:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.156 18:25:43 -- common/autotest_common.sh@10 -- # set +x 00:18:31.156 ************************************ 00:18:31.156 START TEST ublk_recovery 00:18:31.156 ************************************ 00:18:31.156 18:25:43 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:31.156 * Looking for test storage... 00:18:31.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:31.156 18:25:43 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:31.156 18:25:43 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:31.156 18:25:43 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:31.156 18:25:43 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:31.156 18:25:43 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:31.156 18:25:43 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:31.156 18:25:43 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:31.156 18:25:43 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:31.156 18:25:43 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:31.156 18:25:43 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:31.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
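For orientation, the xtrace above is the tail of ublk.sh's test_create_multi_ublk: every ublk disk is stopped, the ublk target is destroyed, and the backing malloc bdevs are deleted. A minimal standalone sketch of that sequence, with the device IDs, bdev names, and the 120 s timeout taken from this run (rpc_py points at the workspace's stock rpc.py):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    MAX_DEV_ID=3   # four devices, ublk0..ublk3, were created by this test

    for i in $(seq 0 "$MAX_DEV_ID"); do
        "$rpc_py" ublk_stop_disk "$i"   # issues UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV
    done

    "$rpc_py" -t 120 ublk_destroy_target   # the "finish shutdown" step in the debug trace

    for i in $(seq 0 "$MAX_DEV_ID"); do
        "$rpc_py" bdev_malloc_delete "Malloc$i"
    done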
00:18:31.156 18:25:43 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=78286 00:18:31.156 18:25:43 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:31.156 18:25:43 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.156 18:25:43 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 78286 00:18:31.156 18:25:43 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 78286 ']' 00:18:31.156 18:25:43 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.156 18:25:43 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.156 18:25:43 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.156 18:25:43 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.156 18:25:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.414 [2024-07-22 18:25:43.250209] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:31.414 [2024-07-22 18:25:43.250365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78286 ] 00:18:31.414 [2024-07-22 18:25:43.412048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:31.673 [2024-07-22 18:25:43.644577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.673 [2024-07-22 18:25:43.644578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.609 18:25:44 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.609 18:25:44 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:18:32.609 18:25:44 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:32.609 18:25:44 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.609 18:25:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:32.609 [2024-07-22 18:25:44.431710] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:32.609 [2024-07-22 18:25:44.434485] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:32.609 18:25:44 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.609 18:25:44 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:32.609 18:25:44 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.609 18:25:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:32.609 malloc0 00:18:32.609 18:25:44 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.609 18:25:44 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:32.609 18:25:44 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.609 18:25:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:32.610 [2024-07-22 18:25:44.573916] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:18:32.610 [2024-07-22 18:25:44.574051] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:32.610 [2024-07-22 18:25:44.574067] ublk.c: 
937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:32.610 [2024-07-22 18:25:44.574079] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:32.610 [2024-07-22 18:25:44.581873] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:32.610 [2024-07-22 18:25:44.581910] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:32.610 [2024-07-22 18:25:44.589707] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:32.610 [2024-07-22 18:25:44.589899] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:32.610 [2024-07-22 18:25:44.600722] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:32.610 1 00:18:32.610 18:25:44 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.610 18:25:44 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:34.012 18:25:45 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=78328 00:18:34.012 18:25:45 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:34.012 18:25:45 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:34.012 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:34.012 fio-3.35 00:18:34.012 Starting 1 process 00:18:39.287 18:25:50 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 78286 00:18:39.287 18:25:50 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:44.556 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 78286 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:44.556 18:25:55 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=78428 00:18:44.556 18:25:55 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:44.556 18:25:55 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.556 18:25:55 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 78428 00:18:44.556 18:25:55 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 78428 ']' 00:18:44.556 18:25:55 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.556 18:25:55 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.556 18:25:55 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.556 18:25:55 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.556 18:25:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.556 [2024-07-22 18:25:55.744482] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
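The commands traced just above, together with the recovery RPCs that follow, are the core of ublk_recovery.sh: fio is started against /dev/ublkb1, the target is hard-killed mid-I/O, a fresh target is brought up, and the disk is recovered in place so fio can finish. A condensed sketch assembled from this run's traced commands (waitforlisten is the autotest_common.sh helper; spdk_pid, fio_proc, and rpc_py mirror the script's variables):

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    fio_proc=$!

    sleep 5
    kill -9 "$spdk_pid"   # simulate a target crash while fio is in flight
    sleep 5

    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &   # restart the target
    spdk_pid=$!
    waitforlisten "$spdk_pid"

    "$rpc_py" ublk_create_target
    "$rpc_py" bdev_malloc_create -b malloc0 64 4096
    "$rpc_py" ublk_recover_disk malloc0 1   # drives UBLK_CMD_START/END_USER_RECOVERY

    wait "$fio_proc"   # fio should complete against the recovered /dev/ublkb1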
00:18:44.556 [2024-07-22 18:25:55.744704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78428 ] 00:18:44.556 [2024-07-22 18:25:55.922964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:44.556 [2024-07-22 18:25:56.214734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.556 [2024-07-22 18:25:56.214750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.123 18:25:57 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.123 18:25:57 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:18:45.123 18:25:57 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:45.123 18:25:57 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.123 18:25:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.123 [2024-07-22 18:25:57.030723] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:45.123 [2024-07-22 18:25:57.033577] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:45.124 18:25:57 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.124 18:25:57 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:45.124 18:25:57 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.124 18:25:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.382 malloc0 00:18:45.382 18:25:57 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.382 18:25:57 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:45.382 18:25:57 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.383 18:25:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.383 [2024-07-22 18:25:57.185931] ublk.c:2077:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:45.383 [2024-07-22 18:25:57.185995] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:45.383 [2024-07-22 18:25:57.186009] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:45.383 [2024-07-22 18:25:57.190777] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:45.383 [2024-07-22 18:25:57.190831] ublk.c:2006:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:45.383 [2024-07-22 18:25:57.190943] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:45.383 1 00:18:45.383 18:25:57 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.383 18:25:57 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 78328 00:19:11.978 [2024-07-22 18:26:20.796758] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:19:11.978 [2024-07-22 18:26:20.804211] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:19:11.978 [2024-07-22 18:26:20.810978] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:19:11.978 [2024-07-22 18:26:20.811018] ublk.c: 379:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:33.926 00:19:33.926 
fio_test: (groupid=0, jobs=1): err= 0: pid=78331: Mon Jul 22 18:26:45 2024 00:19:33.926 read: IOPS=9995, BW=39.0MiB/s (40.9MB/s)(2343MiB/60003msec) 00:19:33.926 slat (usec): min=2, max=689, avg= 6.35, stdev= 2.92 00:19:33.926 clat (usec): min=991, max=30209k, avg=6296.13, stdev=309545.73 00:19:33.926 lat (usec): min=997, max=30209k, avg=6302.49, stdev=309545.71 00:19:33.926 clat percentiles (msec): 00:19:33.926 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:19:33.926 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 4], 00:19:33.926 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:19:33.926 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:19:33.926 | 99.99th=[17113] 00:19:33.926 bw ( KiB/s): min= 3488, max=86928, per=100.00%, avg=78771.42, stdev=12889.02, samples=60 00:19:33.926 iops : min= 872, max=21732, avg=19692.83, stdev=3222.25, samples=60 00:19:33.926 write: IOPS=9984, BW=39.0MiB/s (40.9MB/s)(2340MiB/60003msec); 0 zone resets 00:19:33.926 slat (usec): min=2, max=729, avg= 6.35, stdev= 3.04 00:19:33.926 clat (usec): min=989, max=30209k, avg=6502.30, stdev=314592.39 00:19:33.926 lat (usec): min=995, max=30209k, avg=6508.65, stdev=314592.38 00:19:33.926 clat percentiles (msec): 00:19:33.926 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:19:33.926 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:19:33.926 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 4], 00:19:33.927 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 9], 99.95th=[ 10], 00:19:33.927 | 99.99th=[17113] 00:19:33.927 bw ( KiB/s): min= 3704, max=86576, per=100.00%, avg=78697.97, stdev=12887.78, samples=60 00:19:33.927 iops : min= 926, max=21644, avg=19674.48, stdev=3221.94, samples=60 00:19:33.927 lat (usec) : 1000=0.01% 00:19:33.927 lat (msec) : 2=0.07%, 4=94.83%, 10=5.05%, 20=0.04%, >=2000=0.01% 00:19:33.927 cpu : usr=5.17%, sys=11.87%, ctx=38808, majf=0, minf=13 00:19:33.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:33.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:33.927 issued rwts: total=599756,599111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:33.927 00:19:33.927 Run status group 0 (all jobs): 00:19:33.927 READ: bw=39.0MiB/s (40.9MB/s), 39.0MiB/s-39.0MiB/s (40.9MB/s-40.9MB/s), io=2343MiB (2457MB), run=60003-60003msec 00:19:33.927 WRITE: bw=39.0MiB/s (40.9MB/s), 39.0MiB/s-39.0MiB/s (40.9MB/s-40.9MB/s), io=2340MiB (2454MB), run=60003-60003msec 00:19:33.927 00:19:33.927 Disk stats (read/write): 00:19:33.927 ublkb1: ios=597470/596801, merge=0/0, ticks=3717551/3771137, in_queue=7488689, util=99.94% 00:19:33.927 18:26:45 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:33.927 18:26:45 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.927 18:26:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:33.927 [2024-07-22 18:26:45.869855] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:33.927 [2024-07-22 18:26:45.901794] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:33.927 [2024-07-22 18:26:45.902123] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:33.927 [2024-07-22 18:26:45.909740] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 
completed 00:19:33.927 [2024-07-22 18:26:45.909869] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:33.927 [2024-07-22 18:26:45.909888] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:33.927 18:26:45 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.927 18:26:45 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:33.927 18:26:45 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.927 18:26:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:33.927 [2024-07-22 18:26:45.920882] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:19:33.927 [2024-07-22 18:26:45.928800] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:19:33.927 [2024-07-22 18:26:45.928860] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:33.927 18:26:45 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.927 18:26:45 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:33.927 18:26:45 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:33.927 18:26:45 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 78428 00:19:33.927 18:26:45 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 78428 ']' 00:19:33.927 18:26:45 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 78428 00:19:33.927 18:26:45 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:19:33.927 18:26:45 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:33.927 18:26:45 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78428 00:19:34.223 killing process with pid 78428 00:19:34.223 18:26:45 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:34.223 18:26:45 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:34.223 18:26:45 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78428' 00:19:34.223 18:26:45 ublk_recovery -- common/autotest_common.sh@967 -- # kill 78428 00:19:34.223 18:26:45 ublk_recovery -- common/autotest_common.sh@972 -- # wait 78428 00:19:35.158 [2024-07-22 18:26:47.014291] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:19:35.159 [2024-07-22 18:26:47.014388] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:19:36.535 00:19:36.535 real 1m5.339s 00:19:36.535 user 1m51.303s 00:19:36.536 sys 0m18.555s 00:19:36.536 18:26:48 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:36.536 18:26:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.536 ************************************ 00:19:36.536 END TEST ublk_recovery 00:19:36.536 ************************************ 00:19:36.536 18:26:48 -- common/autotest_common.sh@1142 -- # return 0 00:19:36.536 18:26:48 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:36.536 18:26:48 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:36.536 18:26:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.536 18:26:48 -- common/autotest_common.sh@10 -- # set +x 00:19:36.536 18:26:48 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:36.536 18:26:48 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:19:36.536 18:26:48 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:19:36.536 18:26:48 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:19:36.536 18:26:48 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:19:36.536 18:26:48 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:19:36.536 18:26:48 -- 
spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:19:36.536 18:26:48 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:19:36.536 18:26:48 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:19:36.536 18:26:48 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:19:36.536 18:26:48 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:36.536 18:26:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:36.536 18:26:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:36.536 18:26:48 -- common/autotest_common.sh@10 -- # set +x 00:19:36.536 ************************************ 00:19:36.536 START TEST ftl 00:19:36.536 ************************************ 00:19:36.536 18:26:48 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:36.795 * Looking for test storage... 00:19:36.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:36.795 18:26:48 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:36.795 18:26:48 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:36.795 18:26:48 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:36.795 18:26:48 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:36.795 18:26:48 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:36.795 18:26:48 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:36.795 18:26:48 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:36.795 18:26:48 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:36.795 18:26:48 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:36.795 18:26:48 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.795 18:26:48 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.795 18:26:48 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:36.795 18:26:48 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:36.795 18:26:48 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:36.795 18:26:48 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:36.795 18:26:48 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:36.795 18:26:48 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:36.795 18:26:48 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.795 18:26:48 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.795 18:26:48 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:36.795 18:26:48 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:36.795 18:26:48 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:36.795 18:26:48 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:36.795 18:26:48 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:36.795 18:26:48 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:36.795 18:26:48 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:36.795 18:26:48 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:36.795 18:26:48 ftl -- ftl/common.sh@25 -- # export 
spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:36.795 18:26:48 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:36.795 18:26:48 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:36.795 18:26:48 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:36.795 18:26:48 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:19:36.795 18:26:48 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:36.795 18:26:48 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:36.795 18:26:48 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:37.054 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:37.313 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:37.313 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:37.313 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:37.313 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:37.313 18:26:49 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=79212 00:19:37.313 18:26:49 ftl -- ftl/ftl.sh@38 -- # waitforlisten 79212 00:19:37.313 18:26:49 ftl -- common/autotest_common.sh@829 -- # '[' -z 79212 ']' 00:19:37.313 18:26:49 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.313 18:26:49 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.313 18:26:49 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.313 18:26:49 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.313 18:26:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:37.313 18:26:49 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:37.313 [2024-07-22 18:26:49.278065] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
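The bring-up traced around this point is the standard ftl.sh prologue: spdk_tgt is launched paused with --wait-for-rpc, bdev auto-examine is disabled, framework init is resumed, and the locally attached NVMe controllers are loaded from gen_nvme.sh output. A sketch of those steps from this run's commands (the process substitution stands in for the /dev/fd/62 redirection seen in the trace, and the comment on -d reflects rpc.py's disable-auto-examine option):

    "$SPDK_BIN_DIR/spdk_tgt" --wait-for-rpc &   # start paused until RPCs arrive
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc_py" bdev_set_options -d      # -d: do not auto-examine new bdevs
    "$rpc_py" framework_start_init     # finish subsystem initialization

    # Attach every locally detected NVMe controller.
    "$rpc_py" load_subsystem_config -j <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)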
00:19:37.313 [2024-07-22 18:26:49.278250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79212 ] 00:19:37.572 [2024-07-22 18:26:49.469864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.832 [2024-07-22 18:26:49.763046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.400 18:26:50 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.400 18:26:50 ftl -- common/autotest_common.sh@862 -- # return 0 00:19:38.400 18:26:50 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:38.659 18:26:50 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:39.599 18:26:51 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:39.599 18:26:51 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:40.166 18:26:52 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:40.166 18:26:52 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:40.166 18:26:52 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:40.425 18:26:52 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:40.425 18:26:52 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:40.425 18:26:52 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:40.425 18:26:52 ftl -- ftl/ftl.sh@50 -- # break 00:19:40.425 18:26:52 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:40.425 18:26:52 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:19:40.425 18:26:52 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:40.425 18:26:52 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:40.685 18:26:52 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:40.685 18:26:52 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:40.685 18:26:52 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:40.685 18:26:52 ftl -- ftl/ftl.sh@63 -- # break 00:19:40.685 18:26:52 ftl -- ftl/ftl.sh@66 -- # killprocess 79212 00:19:40.685 18:26:52 ftl -- common/autotest_common.sh@948 -- # '[' -z 79212 ']' 00:19:40.685 18:26:52 ftl -- common/autotest_common.sh@952 -- # kill -0 79212 00:19:40.685 18:26:52 ftl -- common/autotest_common.sh@953 -- # uname 00:19:40.685 18:26:52 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.685 18:26:52 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79212 00:19:40.685 18:26:52 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:40.685 18:26:52 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:40.685 killing process with pid 79212 00:19:40.685 18:26:52 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79212' 00:19:40.685 18:26:52 ftl -- common/autotest_common.sh@967 -- # kill 79212 00:19:40.685 18:26:52 ftl -- common/autotest_common.sh@972 -- # wait 79212 00:19:43.219 18:26:55 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:43.219 18:26:55 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:43.219 18:26:55 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:43.219 18:26:55 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.219 18:26:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:43.219 ************************************ 00:19:43.219 START TEST ftl_fio_basic 00:19:43.219 ************************************ 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:43.219 * Looking for test storage... 00:19:43.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.219 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=79352 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 79352 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 79352 ']' 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.220 18:26:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:43.478 [2024-07-22 18:26:55.273117] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
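What is traced below builds the FTL device under test: a 103424 MiB thin-provisioned lvol on the 0000:00:11.0 drive becomes the base device, a 5171 MiB split of the 0000:00:10.0 drive becomes the non-volatile cache, and bdev_ftl_create binds them as ftl0 with a 60 MiB DRAM L2P limit. A sketch of that RPC sequence (the lvs and base variables are illustrative; the script captures the UUIDs from each RPC's output):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: thin-provisioned lvol on the larger drive.
    "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    lvs=$("$rpc_py" bdev_lvol_create_lvstore nvme0n1 lvs)
    base=$("$rpc_py" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

    # NV cache: one 5171 MiB partition split off the cache drive.
    "$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    "$rpc_py" bdev_split_create nvc0n1 -s 5171 1

    # Bind base + cache into ftl0; allow 240 s, as the test does.
    "$rpc_py" -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 --l2p_dram_limit 60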
00:19:43.478 [2024-07-22 18:26:55.273298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79352 ] 00:19:43.478 [2024-07-22 18:26:55.451854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:43.737 [2024-07-22 18:26:55.729919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.737 [2024-07-22 18:26:55.730032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.737 [2024-07-22 18:26:55.730039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.692 18:26:56 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.692 18:26:56 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:19:44.692 18:26:56 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:44.692 18:26:56 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:44.692 18:26:56 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:44.692 18:26:56 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:44.692 18:26:56 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:44.692 18:26:56 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:44.958 18:26:56 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:44.958 18:26:56 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:44.958 18:26:56 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:44.958 18:26:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:44.958 18:26:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:44.958 18:26:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:44.958 18:26:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:44.959 18:26:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:45.217 18:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:45.217 { 00:19:45.217 "name": "nvme0n1", 00:19:45.217 "aliases": [ 00:19:45.217 "59a8d9ee-ddaf-4067-b0bd-da4165d702e6" 00:19:45.217 ], 00:19:45.217 "product_name": "NVMe disk", 00:19:45.217 "block_size": 4096, 00:19:45.217 "num_blocks": 1310720, 00:19:45.217 "uuid": "59a8d9ee-ddaf-4067-b0bd-da4165d702e6", 00:19:45.217 "assigned_rate_limits": { 00:19:45.217 "rw_ios_per_sec": 0, 00:19:45.217 "rw_mbytes_per_sec": 0, 00:19:45.217 "r_mbytes_per_sec": 0, 00:19:45.217 "w_mbytes_per_sec": 0 00:19:45.217 }, 00:19:45.217 "claimed": false, 00:19:45.217 "zoned": false, 00:19:45.217 "supported_io_types": { 00:19:45.217 "read": true, 00:19:45.217 "write": true, 00:19:45.217 "unmap": true, 00:19:45.217 "flush": true, 00:19:45.217 "reset": true, 00:19:45.217 "nvme_admin": true, 00:19:45.217 "nvme_io": true, 00:19:45.217 "nvme_io_md": false, 00:19:45.217 "write_zeroes": true, 00:19:45.217 "zcopy": false, 00:19:45.217 "get_zone_info": false, 00:19:45.217 "zone_management": false, 00:19:45.217 "zone_append": false, 00:19:45.217 "compare": true, 00:19:45.217 "compare_and_write": false, 00:19:45.217 "abort": true, 00:19:45.217 "seek_hole": false, 00:19:45.218 
"seek_data": false, 00:19:45.218 "copy": true, 00:19:45.218 "nvme_iov_md": false 00:19:45.218 }, 00:19:45.218 "driver_specific": { 00:19:45.218 "nvme": [ 00:19:45.218 { 00:19:45.218 "pci_address": "0000:00:11.0", 00:19:45.218 "trid": { 00:19:45.218 "trtype": "PCIe", 00:19:45.218 "traddr": "0000:00:11.0" 00:19:45.218 }, 00:19:45.218 "ctrlr_data": { 00:19:45.218 "cntlid": 0, 00:19:45.218 "vendor_id": "0x1b36", 00:19:45.218 "model_number": "QEMU NVMe Ctrl", 00:19:45.218 "serial_number": "12341", 00:19:45.218 "firmware_revision": "8.0.0", 00:19:45.218 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:45.218 "oacs": { 00:19:45.218 "security": 0, 00:19:45.218 "format": 1, 00:19:45.218 "firmware": 0, 00:19:45.218 "ns_manage": 1 00:19:45.218 }, 00:19:45.218 "multi_ctrlr": false, 00:19:45.218 "ana_reporting": false 00:19:45.218 }, 00:19:45.218 "vs": { 00:19:45.218 "nvme_version": "1.4" 00:19:45.218 }, 00:19:45.218 "ns_data": { 00:19:45.218 "id": 1, 00:19:45.218 "can_share": false 00:19:45.218 } 00:19:45.218 } 00:19:45.218 ], 00:19:45.218 "mp_policy": "active_passive" 00:19:45.218 } 00:19:45.218 } 00:19:45.218 ]' 00:19:45.218 18:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:45.218 18:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:45.218 18:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:45.476 18:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:45.476 18:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:45.476 18:26:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:19:45.476 18:26:57 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:45.476 18:26:57 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:45.476 18:26:57 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:45.476 18:26:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:45.476 18:26:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:45.734 18:26:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:45.734 18:26:57 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:45.993 18:26:57 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=2197ba84-b48c-4112-93fa-66b22c2ad4b0 00:19:45.993 18:26:57 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2197ba84-b48c-4112-93fa-66b22c2ad4b0 00:19:46.252 18:26:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=5de30a7a-e4bb-4864-9e39-f4a656e92dae 00:19:46.252 18:26:58 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5de30a7a-e4bb-4864-9e39-f4a656e92dae 00:19:46.252 18:26:58 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:46.252 18:26:58 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:46.252 18:26:58 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=5de30a7a-e4bb-4864-9e39-f4a656e92dae 00:19:46.252 18:26:58 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:46.252 18:26:58 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 5de30a7a-e4bb-4864-9e39-f4a656e92dae 00:19:46.252 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=5de30a7a-e4bb-4864-9e39-f4a656e92dae 00:19:46.252 18:26:58 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:46.252 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:46.252 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:46.252 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5de30a7a-e4bb-4864-9e39-f4a656e92dae 00:19:46.511 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:46.511 { 00:19:46.511 "name": "5de30a7a-e4bb-4864-9e39-f4a656e92dae", 00:19:46.511 "aliases": [ 00:19:46.511 "lvs/nvme0n1p0" 00:19:46.511 ], 00:19:46.511 "product_name": "Logical Volume", 00:19:46.511 "block_size": 4096, 00:19:46.511 "num_blocks": 26476544, 00:19:46.511 "uuid": "5de30a7a-e4bb-4864-9e39-f4a656e92dae", 00:19:46.511 "assigned_rate_limits": { 00:19:46.511 "rw_ios_per_sec": 0, 00:19:46.511 "rw_mbytes_per_sec": 0, 00:19:46.511 "r_mbytes_per_sec": 0, 00:19:46.511 "w_mbytes_per_sec": 0 00:19:46.511 }, 00:19:46.511 "claimed": false, 00:19:46.511 "zoned": false, 00:19:46.511 "supported_io_types": { 00:19:46.511 "read": true, 00:19:46.511 "write": true, 00:19:46.511 "unmap": true, 00:19:46.511 "flush": false, 00:19:46.511 "reset": true, 00:19:46.511 "nvme_admin": false, 00:19:46.511 "nvme_io": false, 00:19:46.511 "nvme_io_md": false, 00:19:46.511 "write_zeroes": true, 00:19:46.511 "zcopy": false, 00:19:46.511 "get_zone_info": false, 00:19:46.511 "zone_management": false, 00:19:46.511 "zone_append": false, 00:19:46.511 "compare": false, 00:19:46.511 "compare_and_write": false, 00:19:46.511 "abort": false, 00:19:46.511 "seek_hole": true, 00:19:46.511 "seek_data": true, 00:19:46.511 "copy": false, 00:19:46.511 "nvme_iov_md": false 00:19:46.511 }, 00:19:46.511 "driver_specific": { 00:19:46.511 "lvol": { 00:19:46.511 "lvol_store_uuid": "2197ba84-b48c-4112-93fa-66b22c2ad4b0", 00:19:46.511 "base_bdev": "nvme0n1", 00:19:46.511 "thin_provision": true, 00:19:46.511 "num_allocated_clusters": 0, 00:19:46.511 "snapshot": false, 00:19:46.511 "clone": false, 00:19:46.511 "esnap_clone": false 00:19:46.511 } 00:19:46.511 } 00:19:46.511 } 00:19:46.511 ]' 00:19:46.511 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:46.511 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:46.511 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:46.511 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:46.511 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:46.511 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:46.511 18:26:58 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:46.511 18:26:58 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:46.511 18:26:58 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:46.769 18:26:58 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:46.769 18:26:58 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:46.769 18:26:58 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 5de30a7a-e4bb-4864-9e39-f4a656e92dae 00:19:46.769 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=5de30a7a-e4bb-4864-9e39-f4a656e92dae 00:19:46.769 18:26:58 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:19:46.769 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:46.769 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:46.769 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5de30a7a-e4bb-4864-9e39-f4a656e92dae 00:19:47.028 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:47.028 { 00:19:47.028 "name": "5de30a7a-e4bb-4864-9e39-f4a656e92dae", 00:19:47.028 "aliases": [ 00:19:47.028 "lvs/nvme0n1p0" 00:19:47.028 ], 00:19:47.028 "product_name": "Logical Volume", 00:19:47.028 "block_size": 4096, 00:19:47.028 "num_blocks": 26476544, 00:19:47.028 "uuid": "5de30a7a-e4bb-4864-9e39-f4a656e92dae", 00:19:47.028 "assigned_rate_limits": { 00:19:47.028 "rw_ios_per_sec": 0, 00:19:47.028 "rw_mbytes_per_sec": 0, 00:19:47.028 "r_mbytes_per_sec": 0, 00:19:47.028 "w_mbytes_per_sec": 0 00:19:47.028 }, 00:19:47.028 "claimed": false, 00:19:47.028 "zoned": false, 00:19:47.028 "supported_io_types": { 00:19:47.028 "read": true, 00:19:47.028 "write": true, 00:19:47.028 "unmap": true, 00:19:47.029 "flush": false, 00:19:47.029 "reset": true, 00:19:47.029 "nvme_admin": false, 00:19:47.029 "nvme_io": false, 00:19:47.029 "nvme_io_md": false, 00:19:47.029 "write_zeroes": true, 00:19:47.029 "zcopy": false, 00:19:47.029 "get_zone_info": false, 00:19:47.029 "zone_management": false, 00:19:47.029 "zone_append": false, 00:19:47.029 "compare": false, 00:19:47.029 "compare_and_write": false, 00:19:47.029 "abort": false, 00:19:47.029 "seek_hole": true, 00:19:47.029 "seek_data": true, 00:19:47.029 "copy": false, 00:19:47.029 "nvme_iov_md": false 00:19:47.029 }, 00:19:47.029 "driver_specific": { 00:19:47.029 "lvol": { 00:19:47.029 "lvol_store_uuid": "2197ba84-b48c-4112-93fa-66b22c2ad4b0", 00:19:47.029 "base_bdev": "nvme0n1", 00:19:47.029 "thin_provision": true, 00:19:47.029 "num_allocated_clusters": 0, 00:19:47.029 "snapshot": false, 00:19:47.029 "clone": false, 00:19:47.029 "esnap_clone": false 00:19:47.029 } 00:19:47.029 } 00:19:47.029 } 00:19:47.029 ]' 00:19:47.029 18:26:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:47.029 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:47.029 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:47.287 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:47.287 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:47.287 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:47.287 18:26:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:47.287 18:26:59 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:47.547 18:26:59 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:47.547 18:26:59 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:47.547 18:26:59 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:47.547 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:47.547 18:26:59 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 5de30a7a-e4bb-4864-9e39-f4a656e92dae 00:19:47.547 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=5de30a7a-e4bb-4864-9e39-f4a656e92dae 
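One genuine shell bug is captured in the trace above: fio.sh line 52 evaluates '[' $var -eq 1 ']' with an empty operand, so test sees -eq as its first argument and prints "unary operator expected"; the run falls through and continues. The usual hardening is to default or quote the operand, shown here as a generic sketch with illustrative names, not the upstream fix:

    # '[' $l2p_setting -eq 1 ']' breaks when l2p_setting is empty or unset.
    # Either supply a default...
    [ "${l2p_setting:-0}" -eq 1 ] && echo "L2P variant enabled"
    # ...or use [[ ]], which does not word-split unset operands.
    [[ ${l2p_setting:-0} -eq 1 ]] && echo "L2P variant enabled"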
00:19:47.547 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:47.547 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:47.547 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:47.547 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5de30a7a-e4bb-4864-9e39-f4a656e92dae 00:19:47.805 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:47.806 { 00:19:47.806 "name": "5de30a7a-e4bb-4864-9e39-f4a656e92dae", 00:19:47.806 "aliases": [ 00:19:47.806 "lvs/nvme0n1p0" 00:19:47.806 ], 00:19:47.806 "product_name": "Logical Volume", 00:19:47.806 "block_size": 4096, 00:19:47.806 "num_blocks": 26476544, 00:19:47.806 "uuid": "5de30a7a-e4bb-4864-9e39-f4a656e92dae", 00:19:47.806 "assigned_rate_limits": { 00:19:47.806 "rw_ios_per_sec": 0, 00:19:47.806 "rw_mbytes_per_sec": 0, 00:19:47.806 "r_mbytes_per_sec": 0, 00:19:47.806 "w_mbytes_per_sec": 0 00:19:47.806 }, 00:19:47.806 "claimed": false, 00:19:47.806 "zoned": false, 00:19:47.806 "supported_io_types": { 00:19:47.806 "read": true, 00:19:47.806 "write": true, 00:19:47.806 "unmap": true, 00:19:47.806 "flush": false, 00:19:47.806 "reset": true, 00:19:47.806 "nvme_admin": false, 00:19:47.806 "nvme_io": false, 00:19:47.806 "nvme_io_md": false, 00:19:47.806 "write_zeroes": true, 00:19:47.806 "zcopy": false, 00:19:47.806 "get_zone_info": false, 00:19:47.806 "zone_management": false, 00:19:47.806 "zone_append": false, 00:19:47.806 "compare": false, 00:19:47.806 "compare_and_write": false, 00:19:47.806 "abort": false, 00:19:47.806 "seek_hole": true, 00:19:47.806 "seek_data": true, 00:19:47.806 "copy": false, 00:19:47.806 "nvme_iov_md": false 00:19:47.806 }, 00:19:47.806 "driver_specific": { 00:19:47.806 "lvol": { 00:19:47.806 "lvol_store_uuid": "2197ba84-b48c-4112-93fa-66b22c2ad4b0", 00:19:47.806 "base_bdev": "nvme0n1", 00:19:47.806 "thin_provision": true, 00:19:47.806 "num_allocated_clusters": 0, 00:19:47.806 "snapshot": false, 00:19:47.806 "clone": false, 00:19:47.806 "esnap_clone": false 00:19:47.806 } 00:19:47.806 } 00:19:47.806 } 00:19:47.806 ]' 00:19:47.806 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:47.806 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:47.806 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:47.806 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:47.806 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:47.806 18:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:47.806 18:26:59 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:47.806 18:26:59 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:47.806 18:26:59 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5de30a7a-e4bb-4864-9e39-f4a656e92dae -c nvc0n1p0 --l2p_dram_limit 60 00:19:48.065 [2024-07-22 18:26:59.943243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.065 [2024-07-22 18:26:59.943320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:48.065 [2024-07-22 18:26:59.943344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:48.065 [2024-07-22 18:26:59.943360] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.065 [2024-07-22 18:26:59.943485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.065 [2024-07-22 18:26:59.943508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:48.065 [2024-07-22 18:26:59.943522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:48.065 [2024-07-22 18:26:59.943537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.065 [2024-07-22 18:26:59.943573] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:48.065 [2024-07-22 18:26:59.944605] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:48.065 [2024-07-22 18:26:59.944640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.065 [2024-07-22 18:26:59.944662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:48.065 [2024-07-22 18:26:59.944675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:19:48.065 [2024-07-22 18:26:59.944707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.065 [2024-07-22 18:26:59.944852] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 521e84c5-e197-4622-8f84-1c3791197583 00:19:48.065 [2024-07-22 18:26:59.946737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.065 [2024-07-22 18:26:59.946778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:48.065 [2024-07-22 18:26:59.946798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:48.065 [2024-07-22 18:26:59.946811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.065 [2024-07-22 18:26:59.956384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.065 [2024-07-22 18:26:59.956442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:48.065 [2024-07-22 18:26:59.956463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.473 ms 00:19:48.065 [2024-07-22 18:26:59.956480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.065 [2024-07-22 18:26:59.956674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.065 [2024-07-22 18:26:59.956717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:48.065 [2024-07-22 18:26:59.956735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:19:48.065 [2024-07-22 18:26:59.956748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.065 [2024-07-22 18:26:59.956846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.065 [2024-07-22 18:26:59.956870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:48.065 [2024-07-22 18:26:59.956887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:48.065 [2024-07-22 18:26:59.956899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.065 [2024-07-22 18:26:59.956959] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:48.065 [2024-07-22 18:26:59.962138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.065 [2024-07-22 18:26:59.962183] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:48.065 [2024-07-22 18:26:59.962204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.193 ms 00:19:48.065 [2024-07-22 18:26:59.962219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.065 [2024-07-22 18:26:59.962275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.065 [2024-07-22 18:26:59.962294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:48.065 [2024-07-22 18:26:59.962308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:48.065 [2024-07-22 18:26:59.962321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.065 [2024-07-22 18:26:59.962372] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:48.065 [2024-07-22 18:26:59.962559] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:48.065 [2024-07-22 18:26:59.962583] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:48.065 [2024-07-22 18:26:59.962613] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:48.065 [2024-07-22 18:26:59.962629] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:48.065 [2024-07-22 18:26:59.962645] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:48.065 [2024-07-22 18:26:59.962658] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:48.065 [2024-07-22 18:26:59.962674] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:48.065 [2024-07-22 18:26:59.962701] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:48.065 [2024-07-22 18:26:59.962720] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:48.065 [2024-07-22 18:26:59.962733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.065 [2024-07-22 18:26:59.962747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:48.065 [2024-07-22 18:26:59.962759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:19:48.065 [2024-07-22 18:26:59.962773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.065 [2024-07-22 18:26:59.962874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.065 [2024-07-22 18:26:59.962893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:48.065 [2024-07-22 18:26:59.962906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:48.065 [2024-07-22 18:26:59.962926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.065 [2024-07-22 18:26:59.963051] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:48.065 [2024-07-22 18:26:59.963074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:48.065 [2024-07-22 18:26:59.963087] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.065 [2024-07-22 18:26:59.963102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.065 [2024-07-22 18:26:59.963115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:48.065 [2024-07-22 
18:26:59.963128] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:48.065 [2024-07-22 18:26:59.963140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:48.065 [2024-07-22 18:26:59.963153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:48.065 [2024-07-22 18:26:59.963164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:48.065 [2024-07-22 18:26:59.963177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.065 [2024-07-22 18:26:59.963188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:48.065 [2024-07-22 18:26:59.963201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:48.065 [2024-07-22 18:26:59.963212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.065 [2024-07-22 18:26:59.963227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:48.066 [2024-07-22 18:26:59.963239] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:48.066 [2024-07-22 18:26:59.963252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.066 [2024-07-22 18:26:59.963262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:48.066 [2024-07-22 18:26:59.963278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:48.066 [2024-07-22 18:26:59.963289] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.066 [2024-07-22 18:26:59.963303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:48.066 [2024-07-22 18:26:59.963314] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:48.066 [2024-07-22 18:26:59.963327] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.066 [2024-07-22 18:26:59.963338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:48.066 [2024-07-22 18:26:59.963351] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:48.066 [2024-07-22 18:26:59.963362] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.066 [2024-07-22 18:26:59.963375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:48.066 [2024-07-22 18:26:59.963399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:48.066 [2024-07-22 18:26:59.963414] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.066 [2024-07-22 18:26:59.963425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:48.066 [2024-07-22 18:26:59.963439] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:48.066 [2024-07-22 18:26:59.963450] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.066 [2024-07-22 18:26:59.963469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:48.066 [2024-07-22 18:26:59.963480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:48.066 [2024-07-22 18:26:59.963496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:48.066 [2024-07-22 18:26:59.963507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:48.066 [2024-07-22 18:26:59.963521] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:48.066 [2024-07-22 18:26:59.963532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:19:48.066 [2024-07-22 18:26:59.963544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:48.066 [2024-07-22 18:26:59.963555] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:48.066 [2024-07-22 18:26:59.963571] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.066 [2024-07-22 18:26:59.963582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:48.066 [2024-07-22 18:26:59.963595] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:48.066 [2024-07-22 18:26:59.963606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.066 [2024-07-22 18:26:59.963619] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:48.066 [2024-07-22 18:26:59.963631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:48.066 [2024-07-22 18:26:59.963697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.066 [2024-07-22 18:26:59.963712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.066 [2024-07-22 18:26:59.963727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:48.066 [2024-07-22 18:26:59.963738] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:48.066 [2024-07-22 18:26:59.963755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:48.066 [2024-07-22 18:26:59.963766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:48.066 [2024-07-22 18:26:59.963779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:48.066 [2024-07-22 18:26:59.963791] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:48.066 [2024-07-22 18:26:59.963809] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:48.066 [2024-07-22 18:26:59.963824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.066 [2024-07-22 18:26:59.963841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:48.066 [2024-07-22 18:26:59.963854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:48.066 [2024-07-22 18:26:59.963868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:48.066 [2024-07-22 18:26:59.963881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:48.066 [2024-07-22 18:26:59.963895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:48.066 [2024-07-22 18:26:59.963907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:48.066 [2024-07-22 18:26:59.963922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:48.066 [2024-07-22 18:26:59.963934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:48.066 [2024-07-22 
18:26:59.963955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:48.066 [2024-07-22 18:26:59.963967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:48.066 [2024-07-22 18:26:59.963984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:48.066 [2024-07-22 18:26:59.963996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:48.066 [2024-07-22 18:26:59.964011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:48.066 [2024-07-22 18:26:59.964023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:48.066 [2024-07-22 18:26:59.964038] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:48.066 [2024-07-22 18:26:59.964054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.066 [2024-07-22 18:26:59.964070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:48.066 [2024-07-22 18:26:59.964082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:48.066 [2024-07-22 18:26:59.964096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:48.066 [2024-07-22 18:26:59.964108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:48.066 [2024-07-22 18:26:59.964124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.066 [2024-07-22 18:26:59.964137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:48.066 [2024-07-22 18:26:59.964152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.135 ms 00:19:48.066 [2024-07-22 18:26:59.964164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.066 [2024-07-22 18:26:59.964251] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
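Everything from "Check configuration" down to the NV cache scrub is a single management process ("FTL startup") triggered by the one RPC issued at fio.sh line 60. The sequence the test drives is: split a 5171 MiB partition off nvc0n1 to serve as the write-buffer cache, then create the FTL bdev on the thin-provisioned lvol with the in-DRAM L2P capped at 60 MiB. Condensed from the commands recorded in this log (the generous -t 240 timeout covers the first-create scrub):

    #!/usr/bin/env bash
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # One 5171 MiB split partition of nvc0n1 becomes the NV cache (nvc0n1p0).
    $RPC bdev_split_create nvc0n1 -s 5171 1

    # FTL bdev: base device is the lvol, cache is the split partition,
    # DRAM-resident L2P limited to 60 MiB.
    $RPC -t 240 bdev_ftl_create -b ftl0 \
        -d 5de30a7a-e4bb-4864-9e39-f4a656e92dae \
        -c nvc0n1p0 --l2p_dram_limit 60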
00:19:48.066 [2024-07-22 18:26:59.964279] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:51.348 [2024-07-22 18:27:03.337349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.348 [2024-07-22 18:27:03.337429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:51.348 [2024-07-22 18:27:03.337455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3373.082 ms 00:19:51.348 [2024-07-22 18:27:03.337468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.606 [2024-07-22 18:27:03.376644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.606 [2024-07-22 18:27:03.376723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.606 [2024-07-22 18:27:03.376749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.856 ms 00:19:51.606 [2024-07-22 18:27:03.376762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.606 [2024-07-22 18:27:03.376977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.606 [2024-07-22 18:27:03.376999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:51.606 [2024-07-22 18:27:03.377015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:51.606 [2024-07-22 18:27:03.377028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.606 [2024-07-22 18:27:03.429374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.606 [2024-07-22 18:27:03.429454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.606 [2024-07-22 18:27:03.429483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.255 ms 00:19:51.606 [2024-07-22 18:27:03.429499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.606 [2024-07-22 18:27:03.429594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.606 [2024-07-22 18:27:03.429615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.606 [2024-07-22 18:27:03.429635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:51.606 [2024-07-22 18:27:03.429650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.606 [2024-07-22 18:27:03.430366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.606 [2024-07-22 18:27:03.430407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.606 [2024-07-22 18:27:03.430434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:19:51.606 [2024-07-22 18:27:03.430449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.606 [2024-07-22 18:27:03.430701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.606 [2024-07-22 18:27:03.430733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.606 [2024-07-22 18:27:03.430754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:19:51.606 [2024-07-22 18:27:03.430769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.606 [2024-07-22 18:27:03.453985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.606 [2024-07-22 18:27:03.454049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.606 [2024-07-22 
18:27:03.454073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.163 ms 00:19:51.606 [2024-07-22 18:27:03.454087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.606 [2024-07-22 18:27:03.468630] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:51.606 [2024-07-22 18:27:03.489979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.606 [2024-07-22 18:27:03.490073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:51.606 [2024-07-22 18:27:03.490095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.721 ms 00:19:51.606 [2024-07-22 18:27:03.490110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.606 [2024-07-22 18:27:03.553288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.606 [2024-07-22 18:27:03.553378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:51.606 [2024-07-22 18:27:03.553402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.104 ms 00:19:51.606 [2024-07-22 18:27:03.553417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.606 [2024-07-22 18:27:03.553728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.606 [2024-07-22 18:27:03.553772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:51.606 [2024-07-22 18:27:03.553789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:19:51.606 [2024-07-22 18:27:03.553808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.606 [2024-07-22 18:27:03.584763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.606 [2024-07-22 18:27:03.584831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:51.606 [2024-07-22 18:27:03.584852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.862 ms 00:19:51.606 [2024-07-22 18:27:03.584876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.606 [2024-07-22 18:27:03.615663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.606 [2024-07-22 18:27:03.615752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:51.606 [2024-07-22 18:27:03.615776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.713 ms 00:19:51.606 [2024-07-22 18:27:03.615792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.606 [2024-07-22 18:27:03.616657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.606 [2024-07-22 18:27:03.616707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:51.606 [2024-07-22 18:27:03.616725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:19:51.606 [2024-07-22 18:27:03.616742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.864 [2024-07-22 18:27:03.706553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.864 [2024-07-22 18:27:03.706640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:51.864 [2024-07-22 18:27:03.706668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.712 ms 00:19:51.864 [2024-07-22 18:27:03.706711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.864 [2024-07-22 
18:27:03.740463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.864 [2024-07-22 18:27:03.740536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:51.864 [2024-07-22 18:27:03.740558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.696 ms 00:19:51.864 [2024-07-22 18:27:03.740574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.864 [2024-07-22 18:27:03.773229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.864 [2024-07-22 18:27:03.773321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:51.864 [2024-07-22 18:27:03.773343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.597 ms 00:19:51.864 [2024-07-22 18:27:03.773358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.864 [2024-07-22 18:27:03.805194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.864 [2024-07-22 18:27:03.805267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:51.864 [2024-07-22 18:27:03.805289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.783 ms 00:19:51.865 [2024-07-22 18:27:03.805305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.865 [2024-07-22 18:27:03.805375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.865 [2024-07-22 18:27:03.805404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:51.865 [2024-07-22 18:27:03.805419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:51.865 [2024-07-22 18:27:03.805437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.865 [2024-07-22 18:27:03.805576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.865 [2024-07-22 18:27:03.805601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:51.865 [2024-07-22 18:27:03.805615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:51.865 [2024-07-22 18:27:03.805630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.865 [2024-07-22 18:27:03.807050] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3863.262 ms, result 0 00:19:51.865 { 00:19:51.865 "name": "ftl0", 00:19:51.865 "uuid": "521e84c5-e197-4622-8f84-1c3791197583" 00:19:51.865 } 00:19:51.865 18:27:03 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:51.865 18:27:03 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:19:51.865 18:27:03 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:51.865 18:27:03 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:19:51.865 18:27:03 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:51.865 18:27:03 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:51.865 18:27:03 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:52.123 18:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:52.381 [ 00:19:52.381 { 00:19:52.381 "name": "ftl0", 00:19:52.381 "aliases": [ 00:19:52.381 "521e84c5-e197-4622-8f84-1c3791197583" 00:19:52.381 ], 00:19:52.381 "product_name": "FTL 
disk", 00:19:52.381 "block_size": 4096, 00:19:52.381 "num_blocks": 20971520, 00:19:52.381 "uuid": "521e84c5-e197-4622-8f84-1c3791197583", 00:19:52.381 "assigned_rate_limits": { 00:19:52.381 "rw_ios_per_sec": 0, 00:19:52.381 "rw_mbytes_per_sec": 0, 00:19:52.381 "r_mbytes_per_sec": 0, 00:19:52.381 "w_mbytes_per_sec": 0 00:19:52.381 }, 00:19:52.381 "claimed": false, 00:19:52.381 "zoned": false, 00:19:52.381 "supported_io_types": { 00:19:52.381 "read": true, 00:19:52.381 "write": true, 00:19:52.381 "unmap": true, 00:19:52.381 "flush": true, 00:19:52.381 "reset": false, 00:19:52.381 "nvme_admin": false, 00:19:52.381 "nvme_io": false, 00:19:52.381 "nvme_io_md": false, 00:19:52.381 "write_zeroes": true, 00:19:52.381 "zcopy": false, 00:19:52.381 "get_zone_info": false, 00:19:52.381 "zone_management": false, 00:19:52.381 "zone_append": false, 00:19:52.381 "compare": false, 00:19:52.381 "compare_and_write": false, 00:19:52.381 "abort": false, 00:19:52.381 "seek_hole": false, 00:19:52.381 "seek_data": false, 00:19:52.381 "copy": false, 00:19:52.381 "nvme_iov_md": false 00:19:52.381 }, 00:19:52.381 "driver_specific": { 00:19:52.381 "ftl": { 00:19:52.381 "base_bdev": "5de30a7a-e4bb-4864-9e39-f4a656e92dae", 00:19:52.381 "cache": "nvc0n1p0" 00:19:52.381 } 00:19:52.381 } 00:19:52.381 } 00:19:52.381 ] 00:19:52.381 18:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:19:52.381 18:27:04 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:52.381 18:27:04 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:52.639 18:27:04 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:52.639 18:27:04 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:52.896 [2024-07-22 18:27:04.779805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.896 [2024-07-22 18:27:04.779874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:52.896 [2024-07-22 18:27:04.779900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:52.896 [2024-07-22 18:27:04.779919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.896 [2024-07-22 18:27:04.779964] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:52.896 [2024-07-22 18:27:04.783623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.896 [2024-07-22 18:27:04.783666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:52.896 [2024-07-22 18:27:04.783700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.632 ms 00:19:52.896 [2024-07-22 18:27:04.783718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.896 [2024-07-22 18:27:04.784257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.897 [2024-07-22 18:27:04.784296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:52.897 [2024-07-22 18:27:04.784313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:19:52.897 [2024-07-22 18:27:04.784328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.897 [2024-07-22 18:27:04.787533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.897 [2024-07-22 18:27:04.787573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:52.897 
[2024-07-22 18:27:04.787590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.161 ms 00:19:52.897 [2024-07-22 18:27:04.787604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.897 [2024-07-22 18:27:04.794061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.897 [2024-07-22 18:27:04.794101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:52.897 [2024-07-22 18:27:04.794116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.424 ms 00:19:52.897 [2024-07-22 18:27:04.794130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.897 [2024-07-22 18:27:04.825641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.897 [2024-07-22 18:27:04.825731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:52.897 [2024-07-22 18:27:04.825752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.401 ms 00:19:52.897 [2024-07-22 18:27:04.825767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.897 [2024-07-22 18:27:04.844614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.897 [2024-07-22 18:27:04.844703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:52.897 [2024-07-22 18:27:04.844729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.769 ms 00:19:52.897 [2024-07-22 18:27:04.844745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.897 [2024-07-22 18:27:04.845069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.897 [2024-07-22 18:27:04.845107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:52.897 [2024-07-22 18:27:04.845122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:19:52.897 [2024-07-22 18:27:04.845137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.897 [2024-07-22 18:27:04.877274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.897 [2024-07-22 18:27:04.877355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:52.897 [2024-07-22 18:27:04.877378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.098 ms 00:19:52.897 [2024-07-22 18:27:04.877394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.897 [2024-07-22 18:27:04.907948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.897 [2024-07-22 18:27:04.908045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:52.897 [2024-07-22 18:27:04.908080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.470 ms 00:19:52.897 [2024-07-22 18:27:04.908106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.156 [2024-07-22 18:27:04.938671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.156 [2024-07-22 18:27:04.938754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:53.156 [2024-07-22 18:27:04.938781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.489 ms 00:19:53.156 [2024-07-22 18:27:04.938796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.156 [2024-07-22 18:27:04.969697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.156 [2024-07-22 18:27:04.969762] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:53.156 [2024-07-22 18:27:04.969783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.705 ms 00:19:53.156 [2024-07-22 18:27:04.969798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.156 [2024-07-22 18:27:04.969863] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:53.156 [2024-07-22 18:27:04.969893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.969909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.969924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.969937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.969952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.969965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.969980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.969993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 
[2024-07-22 18:27:04.970207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:53.157 [2024-07-22 18:27:04.970585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.970989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.971001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.971018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.971030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.971045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.971058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:53.157 [2024-07-22 18:27:04.971072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:53.158 [2024-07-22 18:27:04.971424] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:53.158 [2024-07-22 18:27:04.971437] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 521e84c5-e197-4622-8f84-1c3791197583 00:19:53.158 [2024-07-22 18:27:04.971454] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:53.158 [2024-07-22 18:27:04.971466] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:53.158 [2024-07-22 18:27:04.971486] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:53.158 [2024-07-22 18:27:04.971499] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:53.158 [2024-07-22 18:27:04.971512] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:53.158 [2024-07-22 18:27:04.971524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:53.158 [2024-07-22 18:27:04.971539] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:53.158 [2024-07-22 18:27:04.971550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:53.158 [2024-07-22 18:27:04.971564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:53.158 [2024-07-22 18:27:04.971576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.158 [2024-07-22 18:27:04.971590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:53.158 [2024-07-22 18:27:04.971604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.715 ms 00:19:53.158 [2024-07-22 18:27:04.971618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.158 [2024-07-22 18:27:04.988821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.158 [2024-07-22 18:27:04.988878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:53.158 [2024-07-22 18:27:04.988898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.089 ms 00:19:53.158 [2024-07-22 18:27:04.988913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.158 [2024-07-22 18:27:04.989399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.158 [2024-07-22 18:27:04.989443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:53.158 [2024-07-22 18:27:04.989459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:19:53.158 [2024-07-22 18:27:04.989475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.158 [2024-07-22 18:27:05.048720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.158 [2024-07-22 18:27:05.048792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:53.158 [2024-07-22 18:27:05.048812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.158 [2024-07-22 18:27:05.048828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
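The statistics dump that ends here describes a device that never took user I/O: all 100 bands free with wr_cnt 0, total valid LBAs 0, and WAF reported as inf because the 960 total writes are all metadata against 0 user writes. This whole "FTL shutdown" process, including the rollback steps that continue below, is driven by the single bdev_ftl_unload RPC at fio.sh line 73; a sketch of the unload-and-teardown pattern the test uses (pid taken from the killprocess trace later in this log):

    #!/usr/bin/env bash
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Clean unload: persists L2P, band and NV-cache metadata and sets the
    # clean state, so a later load can skip recovery.
    $RPC bdev_ftl_unload -b ftl0

    # killprocess: probe the pid, signal it, then reap it (the target was
    # started by this shell, so 'wait' can collect it).
    pid=79352
    kill -0 "$pid" 2>/dev/null && kill "$pid" && wait "$pid"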
00:19:53.158 [2024-07-22 18:27:05.048929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.158 [2024-07-22 18:27:05.048950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:53.158 [2024-07-22 18:27:05.048963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.158 [2024-07-22 18:27:05.048978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.158 [2024-07-22 18:27:05.049133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.158 [2024-07-22 18:27:05.049157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:53.158 [2024-07-22 18:27:05.049172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.158 [2024-07-22 18:27:05.049186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.158 [2024-07-22 18:27:05.049221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.158 [2024-07-22 18:27:05.049242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:53.158 [2024-07-22 18:27:05.049255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.158 [2024-07-22 18:27:05.049270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.158 [2024-07-22 18:27:05.161291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.158 [2024-07-22 18:27:05.161365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.158 [2024-07-22 18:27:05.161386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.158 [2024-07-22 18:27:05.161402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.416 [2024-07-22 18:27:05.249897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.416 [2024-07-22 18:27:05.249982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.417 [2024-07-22 18:27:05.250005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.417 [2024-07-22 18:27:05.250021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.417 [2024-07-22 18:27:05.250143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.417 [2024-07-22 18:27:05.250168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.417 [2024-07-22 18:27:05.250186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.417 [2024-07-22 18:27:05.250201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.417 [2024-07-22 18:27:05.250291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.417 [2024-07-22 18:27:05.250317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.417 [2024-07-22 18:27:05.250331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.417 [2024-07-22 18:27:05.250345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.417 [2024-07-22 18:27:05.250486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.417 [2024-07-22 18:27:05.250512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.417 [2024-07-22 18:27:05.250529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.417 [2024-07-22 
18:27:05.250543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.417 [2024-07-22 18:27:05.250613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.417 [2024-07-22 18:27:05.250637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:53.417 [2024-07-22 18:27:05.250651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.417 [2024-07-22 18:27:05.250665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.417 [2024-07-22 18:27:05.250750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.417 [2024-07-22 18:27:05.250775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.417 [2024-07-22 18:27:05.250798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.417 [2024-07-22 18:27:05.250816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.417 [2024-07-22 18:27:05.250883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.417 [2024-07-22 18:27:05.250908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.417 [2024-07-22 18:27:05.250922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.417 [2024-07-22 18:27:05.250936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.417 [2024-07-22 18:27:05.251137] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 471.309 ms, result 0 00:19:53.417 true 00:19:53.417 18:27:05 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 79352 00:19:53.417 18:27:05 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 79352 ']' 00:19:53.417 18:27:05 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 79352 00:19:53.417 18:27:05 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:19:53.417 18:27:05 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.417 18:27:05 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79352 00:19:53.417 killing process with pid 79352 00:19:53.417 18:27:05 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:53.417 18:27:05 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:53.417 18:27:05 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79352' 00:19:53.417 18:27:05 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 79352 00:19:53.417 18:27:05 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 79352 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:58.681 18:27:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:58.681 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:19:58.681 fio-3.35 00:19:58.681 Starting 1 thread 00:20:03.984 00:20:03.984 test: (groupid=0, jobs=1): err= 0: pid=79574: Mon Jul 22 18:27:15 2024 00:20:03.984 read: IOPS=1042, BW=69.3MiB/s (72.6MB/s)(255MiB/3675msec) 00:20:03.984 slat (nsec): min=6024, max=50571, avg=8118.58, stdev=3536.24 00:20:03.984 clat (usec): min=298, max=753, avg=423.79, stdev=55.30 00:20:03.984 lat (usec): min=305, max=760, avg=431.91, stdev=55.93 00:20:03.984 clat percentiles (usec): 00:20:03.984 | 1.00th=[ 351], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 371], 00:20:03.984 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 429], 60.00th=[ 441], 00:20:03.984 | 70.00th=[ 445], 80.00th=[ 457], 90.00th=[ 506], 95.00th=[ 529], 00:20:03.984 | 99.00th=[ 586], 99.50th=[ 594], 99.90th=[ 717], 99.95th=[ 725], 00:20:03.984 | 99.99th=[ 758] 00:20:03.984 write: IOPS=1050, BW=69.8MiB/s (73.1MB/s)(256MiB/3671msec); 0 zone resets 00:20:03.984 slat (usec): min=20, max=109, avg=25.33, stdev= 6.46 00:20:03.984 clat (usec): min=344, max=1114, avg=483.82, stdev=62.17 00:20:03.984 lat (usec): min=376, max=1138, avg=509.16, stdev=62.12 00:20:03.984 clat percentiles (usec): 00:20:03.984 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 404], 20.00th=[ 429], 00:20:03.984 | 30.00th=[ 461], 40.00th=[ 469], 50.00th=[ 474], 60.00th=[ 486], 00:20:03.984 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 603], 00:20:03.984 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 758], 99.95th=[ 840], 00:20:03.984 | 99.99th=[ 1123] 00:20:03.984 bw ( KiB/s): min=67864, max=75616, per=99.77%, avg=71264.00, stdev=2484.25, samples=7 00:20:03.984 iops : min= 998, max= 1112, avg=1048.00, stdev=36.53, samples=7 00:20:03.984 lat (usec) : 500=79.20%, 750=20.72%, 1000=0.07% 00:20:03.984 lat (msec) 
: 2=0.01% 00:20:03.984 cpu : usr=99.16%, sys=0.11%, ctx=9, majf=0, minf=1172 00:20:03.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.984 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.984 00:20:03.984 Run status group 0 (all jobs): 00:20:03.984 READ: bw=69.3MiB/s (72.6MB/s), 69.3MiB/s-69.3MiB/s (72.6MB/s-72.6MB/s), io=255MiB (267MB), run=3675-3675msec 00:20:03.984 WRITE: bw=69.8MiB/s (73.1MB/s), 69.8MiB/s-69.8MiB/s (73.1MB/s-73.1MB/s), io=256MiB (269MB), run=3671-3671msec 00:20:04.919 ----------------------------------------------------- 00:20:04.919 Suppressions used: 00:20:04.919 count bytes template 00:20:04.919 1 5 /usr/src/fio/parse.c 00:20:04.919 1 8 libtcmalloc_minimal.so 00:20:04.919 1 904 libcrypto.so 00:20:04.919 ----------------------------------------------------- 00:20:04.919 00:20:04.919 18:27:16 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:20:04.919 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.919 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:05.178 18:27:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:05.178 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:05.178 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:05.178 fio-3.35 00:20:05.178 Starting 2 threads 00:20:37.345 00:20:37.345 first_half: (groupid=0, jobs=1): err= 0: pid=79673: Mon Jul 22 18:27:47 2024 00:20:37.345 read: IOPS=2275, BW=9101KiB/s (9320kB/s)(256MiB/28776msec) 00:20:37.345 slat (nsec): min=4646, max=35832, avg=7661.77, stdev=1896.25 00:20:37.345 clat (usec): min=816, max=423639, avg=47056.44, stdev=30564.16 00:20:37.345 lat (usec): min=823, max=423645, avg=47064.10, stdev=30564.46 00:20:37.345 clat percentiles (msec): 00:20:37.345 | 1.00th=[ 11], 5.00th=[ 38], 10.00th=[ 38], 20.00th=[ 39], 00:20:37.345 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 41], 00:20:37.345 | 70.00th=[ 44], 80.00th=[ 46], 90.00th=[ 53], 95.00th=[ 92], 00:20:37.345 | 99.00th=[ 205], 99.50th=[ 222], 99.90th=[ 300], 99.95th=[ 368], 00:20:37.345 | 99.99th=[ 418] 00:20:37.345 write: IOPS=2280, BW=9123KiB/s (9342kB/s)(256MiB/28735msec); 0 zone resets 00:20:37.345 slat (usec): min=6, max=220, avg= 9.13, stdev= 4.80 00:20:37.345 clat (usec): min=476, max=59667, avg=9152.14, stdev=8794.24 00:20:37.345 lat (usec): min=503, max=59675, avg=9161.26, stdev=8794.55 00:20:37.345 clat percentiles (usec): 00:20:37.345 | 1.00th=[ 1139], 5.00th=[ 1549], 10.00th=[ 1909], 20.00th=[ 3654], 00:20:37.345 | 30.00th=[ 5014], 40.00th=[ 6063], 50.00th=[ 6980], 60.00th=[ 7701], 00:20:37.345 | 70.00th=[ 8848], 80.00th=[11076], 90.00th=[19006], 95.00th=[23462], 00:20:37.345 | 99.00th=[46400], 99.50th=[49546], 99.90th=[53740], 99.95th=[55837], 00:20:37.345 | 99.99th=[58983] 00:20:37.345 bw ( KiB/s): min= 2520, max=48536, per=100.00%, avg=21701.33, stdev=13736.90, samples=24 00:20:37.345 iops : min= 630, max=12134, avg=5425.33, stdev=3434.22, samples=24 00:20:37.345 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.16% 00:20:37.345 lat (msec) : 2=5.29%, 4=5.71%, 10=27.49%, 20=8.40%, 50=47.05% 00:20:37.345 lat (msec) : 100=3.51%, 250=2.27%, 500=0.06% 00:20:37.345 cpu : usr=99.16%, sys=0.19%, ctx=43, majf=0, minf=5556 00:20:37.345 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:37.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.345 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:37.345 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.345 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:37.345 second_half: (groupid=0, jobs=1): err= 0: pid=79674: Mon Jul 22 18:27:47 2024 00:20:37.345 read: IOPS=2293, BW=9175KiB/s (9395kB/s)(256MiB/28551msec) 00:20:37.345 slat (usec): min=4, max=1182, avg= 7.72, stdev= 4.99 00:20:37.345 clat (msec): min=11, max=318, avg=47.69, stdev=28.28 00:20:37.345 lat (msec): min=11, max=318, avg=47.70, stdev=28.28 00:20:37.345 clat percentiles (msec): 00:20:37.345 | 1.00th=[ 36], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 39], 00:20:37.345 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 41], 00:20:37.345 | 70.00th=[ 45], 80.00th=[ 47], 90.00th=[ 54], 95.00th=[ 87], 00:20:37.345 | 99.00th=[ 203], 99.50th=[ 
222], 99.90th=[ 275], 99.95th=[ 288], 00:20:37.345 | 99.99th=[ 296] 00:20:37.345 write: IOPS=2310, BW=9241KiB/s (9462kB/s)(256MiB/28369msec); 0 zone resets 00:20:37.345 slat (usec): min=6, max=459, avg= 9.12, stdev= 4.96 00:20:37.345 clat (usec): min=474, max=49813, avg=8079.96, stdev=5544.52 00:20:37.345 lat (usec): min=489, max=49821, avg=8089.07, stdev=5544.92 00:20:37.345 clat percentiles (usec): 00:20:37.345 | 1.00th=[ 1336], 5.00th=[ 2245], 10.00th=[ 3228], 20.00th=[ 4146], 00:20:37.345 | 30.00th=[ 5145], 40.00th=[ 5866], 50.00th=[ 6718], 60.00th=[ 7373], 00:20:37.345 | 70.00th=[ 8356], 80.00th=[10028], 90.00th=[16909], 95.00th=[20055], 00:20:37.345 | 99.00th=[24773], 99.50th=[32375], 99.90th=[42206], 99.95th=[45876], 00:20:37.345 | 99.99th=[47973] 00:20:37.345 bw ( KiB/s): min= 344, max=45344, per=100.00%, avg=20164.92, stdev=15146.64, samples=26 00:20:37.345 iops : min= 86, max=11336, avg=5041.23, stdev=3786.66, samples=26 00:20:37.345 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.12% 00:20:37.345 lat (msec) : 2=1.60%, 4=7.19%, 10=31.07%, 20=7.33%, 50=46.42% 00:20:37.345 lat (msec) : 100=3.96%, 250=2.16%, 500=0.09% 00:20:37.345 cpu : usr=99.13%, sys=0.14%, ctx=52, majf=0, minf=5565 00:20:37.345 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:37.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.345 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:37.345 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.345 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:37.345 00:20:37.345 Run status group 0 (all jobs): 00:20:37.345 READ: bw=17.8MiB/s (18.6MB/s), 9101KiB/s-9175KiB/s (9320kB/s-9395kB/s), io=512MiB (536MB), run=28551-28776msec 00:20:37.345 WRITE: bw=17.8MiB/s (18.7MB/s), 9123KiB/s-9241KiB/s (9342kB/s-9462kB/s), io=512MiB (537MB), run=28369-28735msec 00:20:37.912 ----------------------------------------------------- 00:20:37.912 Suppressions used: 00:20:37.912 count bytes template 00:20:37.912 2 10 /usr/src/fio/parse.c 00:20:37.912 3 288 /usr/src/fio/iolog.c 00:20:37.912 1 8 libtcmalloc_minimal.so 00:20:37.912 1 904 libcrypto.so 00:20:37.912 ----------------------------------------------------- 00:20:37.912 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:37.912 18:27:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:38.170 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:38.170 fio-3.35 00:20:38.170 Starting 1 thread 00:20:56.276 00:20:56.276 test: (groupid=0, jobs=1): err= 0: pid=80032: Mon Jul 22 18:28:08 2024 00:20:56.276 read: IOPS=6379, BW=24.9MiB/s (26.1MB/s)(255MiB/10221msec) 00:20:56.276 slat (usec): min=4, max=164, avg= 7.23, stdev= 2.21 00:20:56.276 clat (usec): min=832, max=40041, avg=20053.25, stdev=1477.78 00:20:56.276 lat (usec): min=838, max=40049, avg=20060.48, stdev=1477.96 00:20:56.276 clat percentiles (usec): 00:20:56.276 | 1.00th=[18744], 5.00th=[19006], 10.00th=[19006], 20.00th=[19268], 00:20:56.276 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:20:56.276 | 70.00th=[20055], 80.00th=[20579], 90.00th=[21627], 95.00th=[23200], 00:20:56.276 | 99.00th=[24773], 99.50th=[26346], 99.90th=[30278], 99.95th=[35390], 00:20:56.276 | 99.99th=[39060] 00:20:56.276 write: IOPS=10.2k, BW=40.0MiB/s (41.9MB/s)(256MiB/6402msec); 0 zone resets 00:20:56.276 slat (usec): min=6, max=173, avg=11.96, stdev= 5.69 00:20:56.276 clat (usec): min=705, max=68150, avg=12436.46, stdev=15187.41 00:20:56.276 lat (usec): min=714, max=68161, avg=12448.42, stdev=15187.45 00:20:56.276 clat percentiles (usec): 00:20:56.276 | 1.00th=[ 1057], 5.00th=[ 1270], 10.00th=[ 1401], 20.00th=[ 1614], 00:20:56.276 | 30.00th=[ 1827], 40.00th=[ 2376], 50.00th=[ 8455], 60.00th=[ 9765], 00:20:56.276 | 70.00th=[11469], 80.00th=[13829], 90.00th=[44303], 95.00th=[46924], 00:20:56.276 | 99.00th=[51643], 99.50th=[54789], 99.90th=[61604], 99.95th=[63701], 00:20:56.276 | 99.99th=[65799] 00:20:56.276 bw ( KiB/s): min=30256, max=55624, per=98.49%, avg=40329.85, stdev=7551.72, samples=13 00:20:56.276 iops : min= 7564, max=13906, avg=10082.46, stdev=1887.93, samples=13 00:20:56.276 lat (usec) : 750=0.01%, 1000=0.26% 00:20:56.276 lat (msec) : 2=17.43%, 4=3.18%, 10=10.02%, 20=44.98%, 50=23.21% 00:20:56.276 lat (msec) : 100=0.91% 00:20:56.276 cpu : usr=98.95%, sys=0.24%, ctx=21, majf=0, minf=5568 00:20:56.276 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:56.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.276 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.276 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.276 00:20:56.276 Run status group 0 (all jobs): 00:20:56.276 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=255MiB (267MB), run=10221-10221msec 00:20:56.276 WRITE: bw=40.0MiB/s (41.9MB/s), 40.0MiB/s-40.0MiB/s (41.9MB/s-41.9MB/s), io=256MiB (268MB), run=6402-6402msec 00:20:58.179 ----------------------------------------------------- 00:20:58.179 Suppressions used: 00:20:58.179 count bytes template 00:20:58.179 1 5 /usr/src/fio/parse.c 00:20:58.179 2 192 /usr/src/fio/iolog.c 00:20:58.179 1 8 libtcmalloc_minimal.so 00:20:58.179 1 904 libcrypto.so 00:20:58.179 ----------------------------------------------------- 00:20:58.179 00:20:58.179 18:28:10 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:20:58.179 18:28:10 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:58.179 18:28:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:58.437 18:28:10 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:58.437 Remove shared memory files 00:20:58.437 18:28:10 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:20:58.437 18:28:10 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:58.437 18:28:10 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:20:58.437 18:28:10 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:20:58.437 18:28:10 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62062 /dev/shm/spdk_tgt_trace.pid78286 00:20:58.437 18:28:10 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:58.437 18:28:10 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:20:58.437 ************************************ 00:20:58.437 END TEST ftl_fio_basic 00:20:58.437 ************************************ 00:20:58.437 00:20:58.437 real 1m15.178s 00:20:58.437 user 2m46.757s 00:20:58.437 sys 0m4.092s 00:20:58.437 18:28:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:58.437 18:28:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:58.437 18:28:10 ftl -- common/autotest_common.sh@1142 -- # return 0 00:20:58.437 18:28:10 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:58.437 18:28:10 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:58.437 18:28:10 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:58.437 18:28:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:58.437 ************************************ 00:20:58.437 START TEST ftl_bdevperf 00:20:58.437 ************************************ 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:58.437 * Looking for test storage... 
00:20:58.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.437 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:20:58.438 18:28:10 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=80298 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 80298 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 80298 ']' 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.438 18:28:10 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:58.696 [2024-07-22 18:28:10.477564] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:58.696 [2024-07-22 18:28:10.477856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80298 ] 00:20:58.696 [2024-07-22 18:28:10.654573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.955 [2024-07-22 18:28:10.898441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.521 18:28:11 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.521 18:28:11 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:20:59.521 18:28:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:59.521 18:28:11 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:20:59.521 18:28:11 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:59.521 18:28:11 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:20:59.521 18:28:11 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:20:59.521 18:28:11 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:59.779 18:28:11 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:59.779 18:28:11 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:20:59.779 18:28:11 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:59.779 18:28:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:20:59.779 18:28:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:59.779 18:28:11 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:20:59.779 18:28:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:20:59.779 18:28:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:00.037 18:28:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:00.037 { 00:21:00.037 "name": "nvme0n1", 00:21:00.037 "aliases": [ 00:21:00.037 "7280ae56-a497-4d7c-92c3-293cf99c227d" 00:21:00.037 ], 00:21:00.037 "product_name": "NVMe disk", 00:21:00.037 "block_size": 4096, 00:21:00.037 "num_blocks": 1310720, 00:21:00.037 "uuid": "7280ae56-a497-4d7c-92c3-293cf99c227d", 00:21:00.037 "assigned_rate_limits": { 00:21:00.037 "rw_ios_per_sec": 0, 00:21:00.037 "rw_mbytes_per_sec": 0, 00:21:00.037 "r_mbytes_per_sec": 0, 00:21:00.037 "w_mbytes_per_sec": 0 00:21:00.037 }, 00:21:00.037 "claimed": true, 00:21:00.037 "claim_type": "read_many_write_one", 00:21:00.037 "zoned": false, 00:21:00.037 "supported_io_types": { 00:21:00.037 "read": true, 00:21:00.037 "write": true, 00:21:00.037 "unmap": true, 00:21:00.037 "flush": true, 00:21:00.037 "reset": true, 00:21:00.037 "nvme_admin": true, 00:21:00.037 "nvme_io": true, 00:21:00.037 "nvme_io_md": false, 00:21:00.037 "write_zeroes": true, 00:21:00.037 "zcopy": false, 00:21:00.037 "get_zone_info": false, 00:21:00.037 "zone_management": false, 00:21:00.037 "zone_append": false, 00:21:00.037 "compare": true, 00:21:00.037 "compare_and_write": false, 00:21:00.037 "abort": true, 00:21:00.037 "seek_hole": false, 00:21:00.037 "seek_data": false, 00:21:00.037 "copy": true, 00:21:00.037 "nvme_iov_md": false 00:21:00.037 }, 00:21:00.037 "driver_specific": { 00:21:00.037 "nvme": [ 00:21:00.037 { 00:21:00.037 "pci_address": "0000:00:11.0", 00:21:00.037 "trid": { 00:21:00.037 "trtype": "PCIe", 00:21:00.037 "traddr": "0000:00:11.0" 00:21:00.037 }, 00:21:00.037 "ctrlr_data": { 00:21:00.037 "cntlid": 0, 00:21:00.037 "vendor_id": "0x1b36", 00:21:00.037 "model_number": "QEMU NVMe Ctrl", 00:21:00.037 "serial_number": "12341", 00:21:00.037 "firmware_revision": "8.0.0", 00:21:00.037 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:00.037 "oacs": { 00:21:00.037 "security": 0, 00:21:00.037 "format": 1, 00:21:00.037 "firmware": 0, 00:21:00.037 "ns_manage": 1 00:21:00.037 }, 00:21:00.037 "multi_ctrlr": false, 00:21:00.037 "ana_reporting": false 00:21:00.037 }, 00:21:00.037 "vs": { 00:21:00.037 "nvme_version": "1.4" 00:21:00.037 }, 00:21:00.037 "ns_data": { 00:21:00.037 "id": 1, 00:21:00.037 "can_share": false 00:21:00.037 } 00:21:00.037 } 00:21:00.037 ], 00:21:00.037 "mp_policy": "active_passive" 00:21:00.037 } 00:21:00.037 } 00:21:00.037 ]' 00:21:00.037 18:28:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:00.295 18:28:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:00.295 18:28:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:00.295 18:28:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:00.295 18:28:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:00.295 18:28:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:21:00.295 18:28:12 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:21:00.295 18:28:12 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:00.295 18:28:12 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:21:00.295 18:28:12 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:00.295 18:28:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:00.554 18:28:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=2197ba84-b48c-4112-93fa-66b22c2ad4b0 00:21:00.554 18:28:12 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:21:00.554 18:28:12 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2197ba84-b48c-4112-93fa-66b22c2ad4b0 00:21:00.811 18:28:12 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:01.070 18:28:12 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=9443f8e8-9dc3-4955-9360-e2e7188745cf 00:21:01.070 18:28:12 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9443f8e8-9dc3-4955-9360-e2e7188745cf 00:21:01.329 18:28:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=3a4c06fe-7e06-4770-9d6c-a79557ef52d6 00:21:01.329 18:28:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3a4c06fe-7e06-4770-9d6c-a79557ef52d6 00:21:01.329 18:28:13 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:21:01.329 18:28:13 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:01.329 18:28:13 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=3a4c06fe-7e06-4770-9d6c-a79557ef52d6 00:21:01.329 18:28:13 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:21:01.329 18:28:13 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 3a4c06fe-7e06-4770-9d6c-a79557ef52d6 00:21:01.329 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=3a4c06fe-7e06-4770-9d6c-a79557ef52d6 00:21:01.329 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:01.329 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:01.329 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:01.329 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a4c06fe-7e06-4770-9d6c-a79557ef52d6 00:21:01.587 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:01.588 { 00:21:01.588 "name": "3a4c06fe-7e06-4770-9d6c-a79557ef52d6", 00:21:01.588 "aliases": [ 00:21:01.588 "lvs/nvme0n1p0" 00:21:01.588 ], 00:21:01.588 "product_name": "Logical Volume", 00:21:01.588 "block_size": 4096, 00:21:01.588 "num_blocks": 26476544, 00:21:01.588 "uuid": "3a4c06fe-7e06-4770-9d6c-a79557ef52d6", 00:21:01.588 "assigned_rate_limits": { 00:21:01.588 "rw_ios_per_sec": 0, 00:21:01.588 "rw_mbytes_per_sec": 0, 00:21:01.588 "r_mbytes_per_sec": 0, 00:21:01.588 "w_mbytes_per_sec": 0 00:21:01.588 }, 00:21:01.588 "claimed": false, 00:21:01.588 "zoned": false, 00:21:01.588 "supported_io_types": { 00:21:01.588 "read": true, 00:21:01.588 "write": true, 00:21:01.588 "unmap": true, 00:21:01.588 "flush": false, 00:21:01.588 "reset": true, 00:21:01.588 "nvme_admin": false, 00:21:01.588 "nvme_io": false, 00:21:01.588 "nvme_io_md": false, 00:21:01.588 "write_zeroes": true, 00:21:01.588 "zcopy": false, 00:21:01.588 "get_zone_info": false, 00:21:01.588 "zone_management": false, 00:21:01.588 "zone_append": false, 00:21:01.588 "compare": false, 00:21:01.588 "compare_and_write": false, 00:21:01.588 "abort": false, 00:21:01.588 "seek_hole": true, 
00:21:01.588 "seek_data": true, 00:21:01.588 "copy": false, 00:21:01.588 "nvme_iov_md": false 00:21:01.588 }, 00:21:01.588 "driver_specific": { 00:21:01.588 "lvol": { 00:21:01.588 "lvol_store_uuid": "9443f8e8-9dc3-4955-9360-e2e7188745cf", 00:21:01.588 "base_bdev": "nvme0n1", 00:21:01.588 "thin_provision": true, 00:21:01.588 "num_allocated_clusters": 0, 00:21:01.588 "snapshot": false, 00:21:01.588 "clone": false, 00:21:01.588 "esnap_clone": false 00:21:01.588 } 00:21:01.588 } 00:21:01.588 } 00:21:01.588 ]' 00:21:01.588 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:01.588 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:01.588 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:01.588 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:01.588 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:01.588 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:21:01.588 18:28:13 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:21:01.588 18:28:13 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:21:01.588 18:28:13 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:01.846 18:28:13 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:01.846 18:28:13 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:01.846 18:28:13 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 3a4c06fe-7e06-4770-9d6c-a79557ef52d6 00:21:01.846 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=3a4c06fe-7e06-4770-9d6c-a79557ef52d6 00:21:01.846 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:01.846 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:01.846 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:01.846 18:28:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a4c06fe-7e06-4770-9d6c-a79557ef52d6 00:21:02.104 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:02.104 { 00:21:02.104 "name": "3a4c06fe-7e06-4770-9d6c-a79557ef52d6", 00:21:02.105 "aliases": [ 00:21:02.105 "lvs/nvme0n1p0" 00:21:02.105 ], 00:21:02.105 "product_name": "Logical Volume", 00:21:02.105 "block_size": 4096, 00:21:02.105 "num_blocks": 26476544, 00:21:02.105 "uuid": "3a4c06fe-7e06-4770-9d6c-a79557ef52d6", 00:21:02.105 "assigned_rate_limits": { 00:21:02.105 "rw_ios_per_sec": 0, 00:21:02.105 "rw_mbytes_per_sec": 0, 00:21:02.105 "r_mbytes_per_sec": 0, 00:21:02.105 "w_mbytes_per_sec": 0 00:21:02.105 }, 00:21:02.105 "claimed": false, 00:21:02.105 "zoned": false, 00:21:02.105 "supported_io_types": { 00:21:02.105 "read": true, 00:21:02.105 "write": true, 00:21:02.105 "unmap": true, 00:21:02.105 "flush": false, 00:21:02.105 "reset": true, 00:21:02.105 "nvme_admin": false, 00:21:02.105 "nvme_io": false, 00:21:02.105 "nvme_io_md": false, 00:21:02.105 "write_zeroes": true, 00:21:02.105 "zcopy": false, 00:21:02.105 "get_zone_info": false, 00:21:02.105 "zone_management": false, 00:21:02.105 "zone_append": false, 00:21:02.105 "compare": false, 00:21:02.105 "compare_and_write": false, 00:21:02.105 "abort": false, 00:21:02.105 "seek_hole": true, 00:21:02.105 "seek_data": true, 00:21:02.105 
"copy": false, 00:21:02.105 "nvme_iov_md": false 00:21:02.105 }, 00:21:02.105 "driver_specific": { 00:21:02.105 "lvol": { 00:21:02.105 "lvol_store_uuid": "9443f8e8-9dc3-4955-9360-e2e7188745cf", 00:21:02.105 "base_bdev": "nvme0n1", 00:21:02.105 "thin_provision": true, 00:21:02.105 "num_allocated_clusters": 0, 00:21:02.105 "snapshot": false, 00:21:02.105 "clone": false, 00:21:02.105 "esnap_clone": false 00:21:02.105 } 00:21:02.105 } 00:21:02.105 } 00:21:02.105 ]' 00:21:02.105 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:02.105 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:02.105 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:02.363 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:02.363 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:02.363 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:21:02.363 18:28:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:21:02.363 18:28:14 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:02.622 18:28:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:21:02.622 18:28:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 3a4c06fe-7e06-4770-9d6c-a79557ef52d6 00:21:02.622 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=3a4c06fe-7e06-4770-9d6c-a79557ef52d6 00:21:02.622 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:02.622 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:02.622 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:02.622 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a4c06fe-7e06-4770-9d6c-a79557ef52d6 00:21:02.881 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:02.881 { 00:21:02.881 "name": "3a4c06fe-7e06-4770-9d6c-a79557ef52d6", 00:21:02.881 "aliases": [ 00:21:02.881 "lvs/nvme0n1p0" 00:21:02.881 ], 00:21:02.881 "product_name": "Logical Volume", 00:21:02.881 "block_size": 4096, 00:21:02.881 "num_blocks": 26476544, 00:21:02.881 "uuid": "3a4c06fe-7e06-4770-9d6c-a79557ef52d6", 00:21:02.881 "assigned_rate_limits": { 00:21:02.881 "rw_ios_per_sec": 0, 00:21:02.881 "rw_mbytes_per_sec": 0, 00:21:02.881 "r_mbytes_per_sec": 0, 00:21:02.881 "w_mbytes_per_sec": 0 00:21:02.881 }, 00:21:02.881 "claimed": false, 00:21:02.881 "zoned": false, 00:21:02.881 "supported_io_types": { 00:21:02.881 "read": true, 00:21:02.881 "write": true, 00:21:02.881 "unmap": true, 00:21:02.881 "flush": false, 00:21:02.881 "reset": true, 00:21:02.881 "nvme_admin": false, 00:21:02.881 "nvme_io": false, 00:21:02.881 "nvme_io_md": false, 00:21:02.881 "write_zeroes": true, 00:21:02.881 "zcopy": false, 00:21:02.881 "get_zone_info": false, 00:21:02.881 "zone_management": false, 00:21:02.881 "zone_append": false, 00:21:02.881 "compare": false, 00:21:02.881 "compare_and_write": false, 00:21:02.881 "abort": false, 00:21:02.881 "seek_hole": true, 00:21:02.881 "seek_data": true, 00:21:02.881 "copy": false, 00:21:02.881 "nvme_iov_md": false 00:21:02.881 }, 00:21:02.881 "driver_specific": { 00:21:02.881 "lvol": { 00:21:02.881 "lvol_store_uuid": "9443f8e8-9dc3-4955-9360-e2e7188745cf", 00:21:02.881 "base_bdev": 
"nvme0n1", 00:21:02.881 "thin_provision": true, 00:21:02.881 "num_allocated_clusters": 0, 00:21:02.881 "snapshot": false, 00:21:02.881 "clone": false, 00:21:02.881 "esnap_clone": false 00:21:02.881 } 00:21:02.881 } 00:21:02.881 } 00:21:02.881 ]' 00:21:02.881 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:02.881 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:02.881 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:02.881 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:02.881 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:02.881 18:28:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:21:02.881 18:28:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:21:02.881 18:28:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3a4c06fe-7e06-4770-9d6c-a79557ef52d6 -c nvc0n1p0 --l2p_dram_limit 20 00:21:03.140 [2024-07-22 18:28:15.025760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.140 [2024-07-22 18:28:15.025834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:03.140 [2024-07-22 18:28:15.025861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:03.140 [2024-07-22 18:28:15.025874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.140 [2024-07-22 18:28:15.025957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.140 [2024-07-22 18:28:15.025977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:03.140 [2024-07-22 18:28:15.025994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:03.140 [2024-07-22 18:28:15.026009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.140 [2024-07-22 18:28:15.026041] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:03.140 [2024-07-22 18:28:15.027152] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:03.140 [2024-07-22 18:28:15.027196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.140 [2024-07-22 18:28:15.027214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:03.140 [2024-07-22 18:28:15.027229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.160 ms 00:21:03.140 [2024-07-22 18:28:15.027241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.140 [2024-07-22 18:28:15.027382] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d94a527d-880d-47a6-b173-2f6914ffd715 00:21:03.140 [2024-07-22 18:28:15.029215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.140 [2024-07-22 18:28:15.029264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:03.140 [2024-07-22 18:28:15.029282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:03.140 [2024-07-22 18:28:15.029301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.140 [2024-07-22 18:28:15.038955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.140 [2024-07-22 18:28:15.039048] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:03.141 [2024-07-22 18:28:15.039070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.594 ms 00:21:03.141 [2024-07-22 18:28:15.039085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.141 [2024-07-22 18:28:15.039238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.141 [2024-07-22 18:28:15.039268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:03.141 [2024-07-22 18:28:15.039288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:03.141 [2024-07-22 18:28:15.039307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.141 [2024-07-22 18:28:15.039428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.141 [2024-07-22 18:28:15.039453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:03.141 [2024-07-22 18:28:15.039467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:03.141 [2024-07-22 18:28:15.039481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.141 [2024-07-22 18:28:15.039514] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:03.141 [2024-07-22 18:28:15.045034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.141 [2024-07-22 18:28:15.045101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:03.141 [2024-07-22 18:28:15.045126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.524 ms 00:21:03.141 [2024-07-22 18:28:15.045139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.141 [2024-07-22 18:28:15.045212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.141 [2024-07-22 18:28:15.045232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:03.141 [2024-07-22 18:28:15.045248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:03.141 [2024-07-22 18:28:15.045260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.141 [2024-07-22 18:28:15.045323] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:03.141 [2024-07-22 18:28:15.045491] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:03.141 [2024-07-22 18:28:15.045520] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:03.141 [2024-07-22 18:28:15.045538] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:03.141 [2024-07-22 18:28:15.045556] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:03.141 [2024-07-22 18:28:15.045570] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:03.141 [2024-07-22 18:28:15.045585] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:03.141 [2024-07-22 18:28:15.045597] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:03.141 [2024-07-22 18:28:15.045613] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:03.141 [2024-07-22 18:28:15.045625] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:21:03.141 [2024-07-22 18:28:15.045640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.141 [2024-07-22 18:28:15.045652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:03.141 [2024-07-22 18:28:15.045667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:21:03.141 [2024-07-22 18:28:15.045712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.141 [2024-07-22 18:28:15.045823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.141 [2024-07-22 18:28:15.045842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:03.141 [2024-07-22 18:28:15.045857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:03.141 [2024-07-22 18:28:15.045868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.141 [2024-07-22 18:28:15.045975] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:03.141 [2024-07-22 18:28:15.045992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:03.141 [2024-07-22 18:28:15.046007] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:03.141 [2024-07-22 18:28:15.046019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.141 [2024-07-22 18:28:15.046036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:03.141 [2024-07-22 18:28:15.046047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:03.141 [2024-07-22 18:28:15.046061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:03.141 [2024-07-22 18:28:15.046072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:03.141 [2024-07-22 18:28:15.046085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:03.141 [2024-07-22 18:28:15.046095] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:03.141 [2024-07-22 18:28:15.046108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:03.141 [2024-07-22 18:28:15.046118] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:03.141 [2024-07-22 18:28:15.046131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:03.141 [2024-07-22 18:28:15.046142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:03.141 [2024-07-22 18:28:15.046157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:03.141 [2024-07-22 18:28:15.046167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.141 [2024-07-22 18:28:15.046183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:03.141 [2024-07-22 18:28:15.046193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:03.141 [2024-07-22 18:28:15.046223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.141 [2024-07-22 18:28:15.046235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:03.141 [2024-07-22 18:28:15.046251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:03.141 [2024-07-22 18:28:15.046262] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.141 [2024-07-22 18:28:15.046275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:03.141 [2024-07-22 18:28:15.046286] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:03.141 [2024-07-22 18:28:15.046299] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.141 [2024-07-22 18:28:15.046310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:03.141 [2024-07-22 18:28:15.046323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:03.141 [2024-07-22 18:28:15.046333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.141 [2024-07-22 18:28:15.046347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:03.141 [2024-07-22 18:28:15.046358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:03.141 [2024-07-22 18:28:15.046371] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.141 [2024-07-22 18:28:15.046382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:03.141 [2024-07-22 18:28:15.046398] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:03.141 [2024-07-22 18:28:15.046409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:03.141 [2024-07-22 18:28:15.046421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:03.141 [2024-07-22 18:28:15.046432] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:03.141 [2024-07-22 18:28:15.046445] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:03.141 [2024-07-22 18:28:15.046455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:03.141 [2024-07-22 18:28:15.046470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:03.141 [2024-07-22 18:28:15.046481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.141 [2024-07-22 18:28:15.046493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:03.141 [2024-07-22 18:28:15.046504] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:03.141 [2024-07-22 18:28:15.046517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.141 [2024-07-22 18:28:15.046527] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:03.141 [2024-07-22 18:28:15.046541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:03.141 [2024-07-22 18:28:15.046553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:03.141 [2024-07-22 18:28:15.046566] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.141 [2024-07-22 18:28:15.046578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:03.141 [2024-07-22 18:28:15.046593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:03.141 [2024-07-22 18:28:15.046604] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:03.141 [2024-07-22 18:28:15.046617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:03.141 [2024-07-22 18:28:15.046628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:03.141 [2024-07-22 18:28:15.046642] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:03.141 [2024-07-22 18:28:15.046660] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:03.141 [2024-07-22 18:28:15.046691] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:03.141 [2024-07-22 18:28:15.046708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:03.141 [2024-07-22 18:28:15.046723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:03.141 [2024-07-22 18:28:15.046735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:03.141 [2024-07-22 18:28:15.046749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:03.141 [2024-07-22 18:28:15.046760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:03.141 [2024-07-22 18:28:15.046774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:03.141 [2024-07-22 18:28:15.046786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:03.141 [2024-07-22 18:28:15.046799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:03.141 [2024-07-22 18:28:15.046811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:03.141 [2024-07-22 18:28:15.046829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:03.142 [2024-07-22 18:28:15.046841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:03.142 [2024-07-22 18:28:15.046855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:03.142 [2024-07-22 18:28:15.046867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:03.142 [2024-07-22 18:28:15.046881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:03.142 [2024-07-22 18:28:15.046893] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:03.142 [2024-07-22 18:28:15.046908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:03.142 [2024-07-22 18:28:15.046921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:03.142 [2024-07-22 18:28:15.046934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:03.142 [2024-07-22 18:28:15.046946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:03.142 [2024-07-22 18:28:15.046960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:03.142 [2024-07-22 18:28:15.046973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.142 [2024-07-22 18:28:15.046987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:03.142 [2024-07-22 18:28:15.047003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:21:03.142 [2024-07-22 18:28:15.047016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.142 [2024-07-22 18:28:15.047065] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:03.142 [2024-07-22 18:28:15.047094] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:06.426 [2024-07-22 18:28:17.700990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.426 [2024-07-22 18:28:17.701086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:06.426 [2024-07-22 18:28:17.701109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2653.920 ms 00:21:06.426 [2024-07-22 18:28:17.701129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.426 [2024-07-22 18:28:17.751173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.426 [2024-07-22 18:28:17.751275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:06.426 [2024-07-22 18:28:17.751306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.762 ms 00:21:06.426 [2024-07-22 18:28:17.751328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.426 [2024-07-22 18:28:17.751597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.426 [2024-07-22 18:28:17.751625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:06.426 [2024-07-22 18:28:17.751640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:21:06.426 [2024-07-22 18:28:17.751657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.426 [2024-07-22 18:28:17.794919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.426 [2024-07-22 18:28:17.794997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:06.426 [2024-07-22 18:28:17.795019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.184 ms 00:21:06.426 [2024-07-22 18:28:17.795034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.426 [2024-07-22 18:28:17.795094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.426 [2024-07-22 18:28:17.795120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:06.426 [2024-07-22 18:28:17.795134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:06.426 [2024-07-22 18:28:17.795148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.426 [2024-07-22 18:28:17.795820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.426 [2024-07-22 18:28:17.795858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:06.426 [2024-07-22 18:28:17.795880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:21:06.426 [2024-07-22 18:28:17.795894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.426 [2024-07-22 18:28:17.796059] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.426 [2024-07-22 18:28:17.796081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:06.426 [2024-07-22 18:28:17.796095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:21:06.426 [2024-07-22 18:28:17.796115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:17.814318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.427 [2024-07-22 18:28:17.814387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:06.427 [2024-07-22 18:28:17.814407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.178 ms 00:21:06.427 [2024-07-22 18:28:17.814422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:17.829841] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:21:06.427 [2024-07-22 18:28:17.837592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.427 [2024-07-22 18:28:17.837653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:06.427 [2024-07-22 18:28:17.837692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.026 ms 00:21:06.427 [2024-07-22 18:28:17.837708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:17.911648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.427 [2024-07-22 18:28:17.911745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:06.427 [2024-07-22 18:28:17.911772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.876 ms 00:21:06.427 [2024-07-22 18:28:17.911785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:17.912014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.427 [2024-07-22 18:28:17.912033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:06.427 [2024-07-22 18:28:17.912052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:21:06.427 [2024-07-22 18:28:17.912065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:17.944010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.427 [2024-07-22 18:28:17.944084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:06.427 [2024-07-22 18:28:17.944109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.853 ms 00:21:06.427 [2024-07-22 18:28:17.944122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:17.975391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.427 [2024-07-22 18:28:17.975473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:06.427 [2024-07-22 18:28:17.975499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.206 ms 00:21:06.427 [2024-07-22 18:28:17.975512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:17.976402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.427 [2024-07-22 18:28:17.976435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:06.427 [2024-07-22 18:28:17.976454] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:21:06.427 [2024-07-22 18:28:17.976466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:18.067222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.427 [2024-07-22 18:28:18.067302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:06.427 [2024-07-22 18:28:18.067332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.652 ms 00:21:06.427 [2024-07-22 18:28:18.067345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:18.101537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.427 [2024-07-22 18:28:18.101604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:06.427 [2024-07-22 18:28:18.101628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.126 ms 00:21:06.427 [2024-07-22 18:28:18.101641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:18.134837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.427 [2024-07-22 18:28:18.134932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:06.427 [2024-07-22 18:28:18.134959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.121 ms 00:21:06.427 [2024-07-22 18:28:18.134972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:18.168128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.427 [2024-07-22 18:28:18.168208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:06.427 [2024-07-22 18:28:18.168232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.094 ms 00:21:06.427 [2024-07-22 18:28:18.168244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:18.168309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.427 [2024-07-22 18:28:18.168326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:06.427 [2024-07-22 18:28:18.168346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:06.427 [2024-07-22 18:28:18.168358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:18.168507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.427 [2024-07-22 18:28:18.168528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:06.427 [2024-07-22 18:28:18.168544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:06.427 [2024-07-22 18:28:18.168556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.427 [2024-07-22 18:28:18.169968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3143.638 ms, result 0 00:21:06.427 { 00:21:06.427 "name": "ftl0", 00:21:06.427 "uuid": "d94a527d-880d-47a6-b173-2f6914ffd715" 00:21:06.427 } 00:21:06.427 18:28:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:21:06.427 18:28:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:21:06.427 18:28:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:21:06.686 18:28:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:21:06.686 [2024-07-22 18:28:18.594185] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:06.686 I/O size of 69632 is greater than zero copy threshold (65536). 00:21:06.686 Zero copy mechanism will not be used. 00:21:06.686 Running I/O for 4 seconds... 00:21:10.872 00:21:10.872 Latency(us) 00:21:10.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.872 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:21:10.872 ftl0 : 4.00 1902.46 126.33 0.00 0.00 549.28 232.73 2055.45 00:21:10.872 =================================================================================================================== 00:21:10.872 Total : 1902.46 126.33 0.00 0.00 549.28 232.73 2055.45 00:21:10.872 0 00:21:10.872 [2024-07-22 18:28:22.605318] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:10.872 18:28:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:21:10.872 [2024-07-22 18:28:22.720519] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:10.872 Running I/O for 4 seconds... 00:21:15.053 00:21:15.053 Latency(us) 00:21:15.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.053 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:21:15.053 ftl0 : 4.02 7330.51 28.63 0.00 0.00 17416.98 351.88 34078.72 00:21:15.053 =================================================================================================================== 00:21:15.053 Total : 7330.51 28.63 0.00 0.00 17416.98 0.00 34078.72 00:21:15.053 0 00:21:15.053 [2024-07-22 18:28:26.749149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:15.053 18:28:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:21:15.053 [2024-07-22 18:28:26.868132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:15.053 Running I/O for 4 seconds... 
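Aside, not harness output: two quick consistency checks on the log above. The superblock dump lists region sizes in FTL blocks, so with the 4 KiB block size reported for the base bdev, blk_sz:0x800 is exactly the 8.00 MiB shown for the P2L regions; and bdevperf's MiB/s column is IOPS times I/O size, with the 69632-byte transfers also explaining the zero-copy notice, since they exceed the 65536-byte threshold. A minimal bash sketch (assumes bc is installed):
  echo $(( 0x800 * 4096 / 1048576 ))          # 8 -> the blk_sz:0x800 regions are the 8.00 MiB P2L entries
  iops=1902.46; io_size=69632                 # values from the qd=1 randwrite table above
  echo "$iops * $io_size / 1048576" | bc -l   # ~126.33, matching the MiB/s column
  echo $(( io_size > 65536 ))                 # 1 -> over the 65536-byte zero copy threshold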
00:21:19.264 00:21:19.264 Latency(us) 00:21:19.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.264 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:19.264 Verification LBA range: start 0x0 length 0x1400000 00:21:19.264 ftl0 : 4.01 5997.97 23.43 0.00 0.00 21267.80 379.81 31457.28 00:21:19.264 =================================================================================================================== 00:21:19.264 Total : 5997.97 23.43 0.00 0.00 21267.80 0.00 31457.28 00:21:19.264 [2024-07-22 18:28:30.898726] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:19.264 0 00:21:19.264 18:28:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:21:19.264 [2024-07-22 18:28:31.184113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.264 [2024-07-22 18:28:31.184180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:19.264 [2024-07-22 18:28:31.184207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:19.264 [2024-07-22 18:28:31.184221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.264 [2024-07-22 18:28:31.184263] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:19.264 [2024-07-22 18:28:31.187904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.264 [2024-07-22 18:28:31.187944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:19.264 [2024-07-22 18:28:31.187960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.616 ms 00:21:19.264 [2024-07-22 18:28:31.187977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.264 [2024-07-22 18:28:31.189754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.264 [2024-07-22 18:28:31.189807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:19.264 [2024-07-22 18:28:31.189824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.748 ms 00:21:19.264 [2024-07-22 18:28:31.189839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-07-22 18:28:31.374274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-07-22 18:28:31.374375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:19.522 [2024-07-22 18:28:31.374398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 184.405 ms 00:21:19.522 [2024-07-22 18:28:31.374417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.522 [2024-07-22 18:28:31.380990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.522 [2024-07-22 18:28:31.381031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:19.522 [2024-07-22 18:28:31.381047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.525 ms 00:21:19.522 [2024-07-22 18:28:31.381061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.523 [2024-07-22 18:28:31.412174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.523 [2024-07-22 18:28:31.412227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:19.523 [2024-07-22 18:28:31.412245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 31.042 ms 00:21:19.523 [2024-07-22 18:28:31.412260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.523 [2024-07-22 18:28:31.431215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.523 [2024-07-22 18:28:31.431276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:19.523 [2024-07-22 18:28:31.431295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.907 ms 00:21:19.523 [2024-07-22 18:28:31.431314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.523 [2024-07-22 18:28:31.431514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.523 [2024-07-22 18:28:31.431541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:19.523 [2024-07-22 18:28:31.431556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:21:19.523 [2024-07-22 18:28:31.431574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.523 [2024-07-22 18:28:31.462615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.523 [2024-07-22 18:28:31.462661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:19.523 [2024-07-22 18:28:31.462714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.016 ms 00:21:19.523 [2024-07-22 18:28:31.462733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.523 [2024-07-22 18:28:31.493242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.523 [2024-07-22 18:28:31.493286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:19.523 [2024-07-22 18:28:31.493303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.463 ms 00:21:19.523 [2024-07-22 18:28:31.493317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.523 [2024-07-22 18:28:31.523131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.523 [2024-07-22 18:28:31.523178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:19.523 [2024-07-22 18:28:31.523195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.769 ms 00:21:19.523 [2024-07-22 18:28:31.523209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-07-22 18:28:31.553227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.782 [2024-07-22 18:28:31.553274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:19.783 [2024-07-22 18:28:31.553292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.907 ms 00:21:19.783 [2024-07-22 18:28:31.553346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.783 [2024-07-22 18:28:31.553394] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:19.783 [2024-07-22 18:28:31.553421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:21:19.783 [2024-07-22 18:28:31.553478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.553986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:19.783 [2024-07-22 18:28:31.554267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554530] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:19.784 [2024-07-22 18:28:31.554853] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:19.784 [2024-07-22 18:28:31.554865] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d94a527d-880d-47a6-b173-2f6914ffd715 00:21:19.784 [2024-07-22 18:28:31.554879] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:19.784 [2024-07-22 18:28:31.554890] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:21:19.784 [2024-07-22 18:28:31.554904] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:19.784 [2024-07-22 18:28:31.554915] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:19.784 [2024-07-22 18:28:31.554932] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:19.784 [2024-07-22 18:28:31.554944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:19.784 [2024-07-22 18:28:31.554957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:19.784 [2024-07-22 18:28:31.554968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:19.784 [2024-07-22 18:28:31.554983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:19.784 [2024-07-22 18:28:31.554995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.784 [2024-07-22 18:28:31.555010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:19.784 [2024-07-22 18:28:31.555023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:21:19.784 [2024-07-22 18:28:31.555037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.784 [2024-07-22 18:28:31.571949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.784 [2024-07-22 18:28:31.572001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:19.784 [2024-07-22 18:28:31.572023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.849 ms 00:21:19.784 [2024-07-22 18:28:31.572038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.784 [2024-07-22 18:28:31.572504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.784 [2024-07-22 18:28:31.572538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:19.784 [2024-07-22 18:28:31.572554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:21:19.784 [2024-07-22 18:28:31.572568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.784 [2024-07-22 18:28:31.613620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.784 [2024-07-22 18:28:31.613717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:19.784 [2024-07-22 18:28:31.613737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.784 [2024-07-22 18:28:31.613755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.785 [2024-07-22 18:28:31.613848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.785 [2024-07-22 18:28:31.613868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:19.785 [2024-07-22 18:28:31.613881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.785 [2024-07-22 18:28:31.613895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.785 [2024-07-22 18:28:31.614020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.785 [2024-07-22 18:28:31.614046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:19.785 [2024-07-22 18:28:31.614064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.785 [2024-07-22 18:28:31.614078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.785 [2024-07-22 18:28:31.614102] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.785 [2024-07-22 18:28:31.614120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:19.785 [2024-07-22 18:28:31.614133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.785 [2024-07-22 18:28:31.614146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.785 [2024-07-22 18:28:31.718330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.785 [2024-07-22 18:28:31.718396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:19.785 [2024-07-22 18:28:31.718418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.785 [2024-07-22 18:28:31.718436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.045 [2024-07-22 18:28:31.805366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:20.045 [2024-07-22 18:28:31.805441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:20.045 [2024-07-22 18:28:31.805462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:20.045 [2024-07-22 18:28:31.805477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.045 [2024-07-22 18:28:31.805587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:20.045 [2024-07-22 18:28:31.805611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:20.045 [2024-07-22 18:28:31.805625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:20.045 [2024-07-22 18:28:31.805644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.045 [2024-07-22 18:28:31.805727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:20.045 [2024-07-22 18:28:31.805752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:20.045 [2024-07-22 18:28:31.805766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:20.045 [2024-07-22 18:28:31.805780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.045 [2024-07-22 18:28:31.805915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:20.045 [2024-07-22 18:28:31.805941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:20.045 [2024-07-22 18:28:31.805955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:20.045 [2024-07-22 18:28:31.805972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.045 [2024-07-22 18:28:31.806025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:20.045 [2024-07-22 18:28:31.806048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:20.045 [2024-07-22 18:28:31.806062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:20.045 [2024-07-22 18:28:31.806075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.045 [2024-07-22 18:28:31.806122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:20.045 [2024-07-22 18:28:31.806148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:20.045 [2024-07-22 18:28:31.806161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:20.045 [2024-07-22 18:28:31.806176] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:20.045 [2024-07-22 18:28:31.806236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:20.045 [2024-07-22 18:28:31.806257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:20.045 [2024-07-22 18:28:31.806269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:20.045 [2024-07-22 18:28:31.806282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.045 [2024-07-22 18:28:31.806439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 622.287 ms, result 0 00:21:20.045 true 00:21:20.045 18:28:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 80298 00:21:20.045 18:28:31 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 80298 ']' 00:21:20.045 18:28:31 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 80298 00:21:20.045 18:28:31 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname 00:21:20.045 18:28:31 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.045 18:28:31 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80298 00:21:20.045 killing process with pid 80298 00:21:20.045 Received shutdown signal, test time was about 4.000000 seconds 00:21:20.045 00:21:20.045 Latency(us) 00:21:20.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.045 =================================================================================================================== 00:21:20.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.045 18:28:31 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:20.045 18:28:31 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:20.045 18:28:31 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80298' 00:21:20.045 18:28:31 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 80298 00:21:20.045 18:28:31 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 80298 00:21:24.244 18:28:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:21:24.244 18:28:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:21:24.244 18:28:35 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:24.244 18:28:35 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:24.244 18:28:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:21:24.244 Remove shared memory files 00:21:24.244 18:28:35 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:24.244 18:28:35 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:21:24.244 18:28:35 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:21:24.244 18:28:35 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:21:24.244 18:28:35 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:24.244 18:28:35 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:21:24.244 ************************************ 00:21:24.244 END TEST ftl_bdevperf 00:21:24.244 ************************************ 00:21:24.244 00:21:24.244 real 0m25.454s 00:21:24.244 user 0m28.822s 00:21:24.244 sys 0m1.257s 00:21:24.244 18:28:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:24.244 18:28:35 ftl.ftl_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:21:24.244 18:28:35 ftl -- common/autotest_common.sh@1142 -- # return 0 00:21:24.244 18:28:35 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:24.244 18:28:35 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:24.244 18:28:35 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:24.244 18:28:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:24.244 ************************************ 00:21:24.244 START TEST ftl_trim 00:21:24.244 ************************************ 00:21:24.244 18:28:35 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:24.244 * Looking for test storage... 00:21:24.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:24.244 
18:28:35 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=80662 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 80662 00:21:24.244 18:28:35 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:24.244 18:28:35 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 80662 ']' 00:21:24.244 18:28:35 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.244 18:28:35 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.244 18:28:35 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.244 18:28:35 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.244 18:28:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:24.244 [2024-07-22 18:28:36.019934] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
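Aside, not harness output: the -m 0x7 argument passed to spdk_tgt above is a hexadecimal CPU mask with bits 0 through 2 set, which is why the startup banner below reports three available cores and three reactors come up. A small bash sketch decoding the mask:
  # Decode an SPDK-style reactor mask; 0x7 selects cores 0, 1 and 2.
  mask=0x7
  for core in {0..7}; do
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done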
00:21:24.244 [2024-07-22 18:28:36.020371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80662 ] 00:21:24.244 [2024-07-22 18:28:36.201385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:24.503 [2024-07-22 18:28:36.500279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.503 [2024-07-22 18:28:36.500408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.503 [2024-07-22 18:28:36.500421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.439 18:28:37 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.439 18:28:37 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:21:25.439 18:28:37 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:25.439 18:28:37 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:25.439 18:28:37 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:25.439 18:28:37 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:25.439 18:28:37 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:25.439 18:28:37 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:25.698 18:28:37 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:25.698 18:28:37 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:25.698 18:28:37 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:25.698 18:28:37 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:25.698 18:28:37 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:25.698 18:28:37 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:25.698 18:28:37 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:25.698 18:28:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:25.956 18:28:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:25.956 { 00:21:25.956 "name": "nvme0n1", 00:21:25.956 "aliases": [ 00:21:25.956 "60ce4856-6f66-4f1b-b700-4427ba2799af" 00:21:25.956 ], 00:21:25.956 "product_name": "NVMe disk", 00:21:25.956 "block_size": 4096, 00:21:25.956 "num_blocks": 1310720, 00:21:25.956 "uuid": "60ce4856-6f66-4f1b-b700-4427ba2799af", 00:21:25.956 "assigned_rate_limits": { 00:21:25.956 "rw_ios_per_sec": 0, 00:21:25.956 "rw_mbytes_per_sec": 0, 00:21:25.956 "r_mbytes_per_sec": 0, 00:21:25.956 "w_mbytes_per_sec": 0 00:21:25.956 }, 00:21:25.956 "claimed": true, 00:21:25.956 "claim_type": "read_many_write_one", 00:21:25.956 "zoned": false, 00:21:25.956 "supported_io_types": { 00:21:25.956 "read": true, 00:21:25.956 "write": true, 00:21:25.956 "unmap": true, 00:21:25.956 "flush": true, 00:21:25.956 "reset": true, 00:21:25.956 "nvme_admin": true, 00:21:25.956 "nvme_io": true, 00:21:25.956 "nvme_io_md": false, 00:21:25.956 "write_zeroes": true, 00:21:25.956 "zcopy": false, 00:21:25.956 "get_zone_info": false, 00:21:25.956 "zone_management": false, 00:21:25.956 "zone_append": false, 00:21:25.956 "compare": true, 00:21:25.956 "compare_and_write": false, 00:21:25.956 "abort": true, 00:21:25.956 "seek_hole": false, 00:21:25.956 "seek_data": false, 00:21:25.956 
"copy": true, 00:21:25.956 "nvme_iov_md": false 00:21:25.956 }, 00:21:25.956 "driver_specific": { 00:21:25.956 "nvme": [ 00:21:25.956 { 00:21:25.956 "pci_address": "0000:00:11.0", 00:21:25.956 "trid": { 00:21:25.956 "trtype": "PCIe", 00:21:25.956 "traddr": "0000:00:11.0" 00:21:25.956 }, 00:21:25.956 "ctrlr_data": { 00:21:25.956 "cntlid": 0, 00:21:25.956 "vendor_id": "0x1b36", 00:21:25.956 "model_number": "QEMU NVMe Ctrl", 00:21:25.956 "serial_number": "12341", 00:21:25.956 "firmware_revision": "8.0.0", 00:21:25.956 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:25.956 "oacs": { 00:21:25.956 "security": 0, 00:21:25.956 "format": 1, 00:21:25.956 "firmware": 0, 00:21:25.956 "ns_manage": 1 00:21:25.956 }, 00:21:25.956 "multi_ctrlr": false, 00:21:25.956 "ana_reporting": false 00:21:25.956 }, 00:21:25.956 "vs": { 00:21:25.956 "nvme_version": "1.4" 00:21:25.957 }, 00:21:25.957 "ns_data": { 00:21:25.957 "id": 1, 00:21:25.957 "can_share": false 00:21:25.957 } 00:21:25.957 } 00:21:25.957 ], 00:21:25.957 "mp_policy": "active_passive" 00:21:25.957 } 00:21:25.957 } 00:21:25.957 ]' 00:21:25.957 18:28:37 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:26.215 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:26.215 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:26.215 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:26.215 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:26.215 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:21:26.215 18:28:38 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:26.215 18:28:38 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:26.215 18:28:38 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:26.215 18:28:38 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:26.215 18:28:38 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:26.473 18:28:38 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=9443f8e8-9dc3-4955-9360-e2e7188745cf 00:21:26.473 18:28:38 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:26.473 18:28:38 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9443f8e8-9dc3-4955-9360-e2e7188745cf 00:21:26.731 18:28:38 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:26.990 18:28:38 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=4f09dec7-2ace-490a-84ad-13a2ed981edf 00:21:26.990 18:28:38 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4f09dec7-2ace-490a-84ad-13a2ed981edf 00:21:27.248 18:28:39 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=3e285bb3-eb22-4913-8cb3-b02efe56d2c9 00:21:27.248 18:28:39 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3e285bb3-eb22-4913-8cb3-b02efe56d2c9 00:21:27.248 18:28:39 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:27.248 18:28:39 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:27.248 18:28:39 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=3e285bb3-eb22-4913-8cb3-b02efe56d2c9 00:21:27.248 18:28:39 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:27.248 18:28:39 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 3e285bb3-eb22-4913-8cb3-b02efe56d2c9 00:21:27.248 18:28:39 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=3e285bb3-eb22-4913-8cb3-b02efe56d2c9 00:21:27.248 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:27.248 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:27.248 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:27.248 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e285bb3-eb22-4913-8cb3-b02efe56d2c9 00:21:27.507 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:27.507 { 00:21:27.507 "name": "3e285bb3-eb22-4913-8cb3-b02efe56d2c9", 00:21:27.507 "aliases": [ 00:21:27.507 "lvs/nvme0n1p0" 00:21:27.507 ], 00:21:27.507 "product_name": "Logical Volume", 00:21:27.507 "block_size": 4096, 00:21:27.507 "num_blocks": 26476544, 00:21:27.507 "uuid": "3e285bb3-eb22-4913-8cb3-b02efe56d2c9", 00:21:27.507 "assigned_rate_limits": { 00:21:27.507 "rw_ios_per_sec": 0, 00:21:27.507 "rw_mbytes_per_sec": 0, 00:21:27.507 "r_mbytes_per_sec": 0, 00:21:27.507 "w_mbytes_per_sec": 0 00:21:27.507 }, 00:21:27.507 "claimed": false, 00:21:27.507 "zoned": false, 00:21:27.507 "supported_io_types": { 00:21:27.507 "read": true, 00:21:27.507 "write": true, 00:21:27.507 "unmap": true, 00:21:27.507 "flush": false, 00:21:27.507 "reset": true, 00:21:27.507 "nvme_admin": false, 00:21:27.507 "nvme_io": false, 00:21:27.507 "nvme_io_md": false, 00:21:27.507 "write_zeroes": true, 00:21:27.507 "zcopy": false, 00:21:27.507 "get_zone_info": false, 00:21:27.507 "zone_management": false, 00:21:27.507 "zone_append": false, 00:21:27.507 "compare": false, 00:21:27.507 "compare_and_write": false, 00:21:27.507 "abort": false, 00:21:27.507 "seek_hole": true, 00:21:27.507 "seek_data": true, 00:21:27.507 "copy": false, 00:21:27.507 "nvme_iov_md": false 00:21:27.507 }, 00:21:27.507 "driver_specific": { 00:21:27.507 "lvol": { 00:21:27.507 "lvol_store_uuid": "4f09dec7-2ace-490a-84ad-13a2ed981edf", 00:21:27.507 "base_bdev": "nvme0n1", 00:21:27.507 "thin_provision": true, 00:21:27.507 "num_allocated_clusters": 0, 00:21:27.507 "snapshot": false, 00:21:27.507 "clone": false, 00:21:27.507 "esnap_clone": false 00:21:27.507 } 00:21:27.507 } 00:21:27.507 } 00:21:27.507 ]' 00:21:27.507 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:27.507 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:27.507 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:27.507 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:27.507 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:27.507 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:27.507 18:28:39 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:27.507 18:28:39 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:27.507 18:28:39 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:27.766 18:28:39 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:27.766 18:28:39 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:27.766 18:28:39 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 3e285bb3-eb22-4913-8cb3-b02efe56d2c9 00:21:27.766 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=3e285bb3-eb22-4913-8cb3-b02efe56d2c9 00:21:27.766 
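create_nv_cache_bdev begins with the attach traced above: the second QEMU NVMe controller at PCIe address 0000:00:10.0 is registered under the name nvc0, and its namespace surfaces as bdev nvc0n1. The call, verbatim apart from the shortened rpc path:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the cache controller; namespace 1 becomes bdev nvc0n1.
    "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0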
18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:27.766 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:27.766 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:27.766 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e285bb3-eb22-4913-8cb3-b02efe56d2c9 00:21:28.024 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:28.024 { 00:21:28.025 "name": "3e285bb3-eb22-4913-8cb3-b02efe56d2c9", 00:21:28.025 "aliases": [ 00:21:28.025 "lvs/nvme0n1p0" 00:21:28.025 ], 00:21:28.025 "product_name": "Logical Volume", 00:21:28.025 "block_size": 4096, 00:21:28.025 "num_blocks": 26476544, 00:21:28.025 "uuid": "3e285bb3-eb22-4913-8cb3-b02efe56d2c9", 00:21:28.025 "assigned_rate_limits": { 00:21:28.025 "rw_ios_per_sec": 0, 00:21:28.025 "rw_mbytes_per_sec": 0, 00:21:28.025 "r_mbytes_per_sec": 0, 00:21:28.025 "w_mbytes_per_sec": 0 00:21:28.025 }, 00:21:28.025 "claimed": false, 00:21:28.025 "zoned": false, 00:21:28.025 "supported_io_types": { 00:21:28.025 "read": true, 00:21:28.025 "write": true, 00:21:28.025 "unmap": true, 00:21:28.025 "flush": false, 00:21:28.025 "reset": true, 00:21:28.025 "nvme_admin": false, 00:21:28.025 "nvme_io": false, 00:21:28.025 "nvme_io_md": false, 00:21:28.025 "write_zeroes": true, 00:21:28.025 "zcopy": false, 00:21:28.025 "get_zone_info": false, 00:21:28.025 "zone_management": false, 00:21:28.025 "zone_append": false, 00:21:28.025 "compare": false, 00:21:28.025 "compare_and_write": false, 00:21:28.025 "abort": false, 00:21:28.025 "seek_hole": true, 00:21:28.025 "seek_data": true, 00:21:28.025 "copy": false, 00:21:28.025 "nvme_iov_md": false 00:21:28.025 }, 00:21:28.025 "driver_specific": { 00:21:28.025 "lvol": { 00:21:28.025 "lvol_store_uuid": "4f09dec7-2ace-490a-84ad-13a2ed981edf", 00:21:28.025 "base_bdev": "nvme0n1", 00:21:28.025 "thin_provision": true, 00:21:28.025 "num_allocated_clusters": 0, 00:21:28.025 "snapshot": false, 00:21:28.025 "clone": false, 00:21:28.025 "esnap_clone": false 00:21:28.025 } 00:21:28.025 } 00:21:28.025 } 00:21:28.025 ]' 00:21:28.025 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:28.025 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:28.025 18:28:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:28.283 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:28.283 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:28.283 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:28.283 18:28:40 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:28.283 18:28:40 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:28.283 18:28:40 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:28.283 18:28:40 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:28.283 18:28:40 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 3e285bb3-eb22-4913-8cb3-b02efe56d2c9 00:21:28.283 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=3e285bb3-eb22-4913-8cb3-b02efe56d2c9 00:21:28.283 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:28.283 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:28.283 18:28:40 ftl.ftl_trim -- 
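The cache namespace is then carved into a single partition sized to the 5171 MiB figure from the trace (cache_size here matches the base_size computed earlier), and the resulting nvc0n1p0 becomes the FTL write buffer:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # One 5171 MiB split of nvc0n1; SPDK names the partition nvc0n1p0.
    "$rpc" bdev_split_create nvc0n1 -s 5171 1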
common/autotest_common.sh@1381 -- # local nb 00:21:28.283 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e285bb3-eb22-4913-8cb3-b02efe56d2c9 00:21:28.543 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:28.543 { 00:21:28.543 "name": "3e285bb3-eb22-4913-8cb3-b02efe56d2c9", 00:21:28.543 "aliases": [ 00:21:28.543 "lvs/nvme0n1p0" 00:21:28.543 ], 00:21:28.543 "product_name": "Logical Volume", 00:21:28.543 "block_size": 4096, 00:21:28.543 "num_blocks": 26476544, 00:21:28.543 "uuid": "3e285bb3-eb22-4913-8cb3-b02efe56d2c9", 00:21:28.543 "assigned_rate_limits": { 00:21:28.543 "rw_ios_per_sec": 0, 00:21:28.543 "rw_mbytes_per_sec": 0, 00:21:28.543 "r_mbytes_per_sec": 0, 00:21:28.543 "w_mbytes_per_sec": 0 00:21:28.543 }, 00:21:28.543 "claimed": false, 00:21:28.543 "zoned": false, 00:21:28.543 "supported_io_types": { 00:21:28.543 "read": true, 00:21:28.543 "write": true, 00:21:28.543 "unmap": true, 00:21:28.543 "flush": false, 00:21:28.543 "reset": true, 00:21:28.543 "nvme_admin": false, 00:21:28.543 "nvme_io": false, 00:21:28.543 "nvme_io_md": false, 00:21:28.543 "write_zeroes": true, 00:21:28.543 "zcopy": false, 00:21:28.543 "get_zone_info": false, 00:21:28.543 "zone_management": false, 00:21:28.543 "zone_append": false, 00:21:28.543 "compare": false, 00:21:28.543 "compare_and_write": false, 00:21:28.543 "abort": false, 00:21:28.543 "seek_hole": true, 00:21:28.543 "seek_data": true, 00:21:28.543 "copy": false, 00:21:28.543 "nvme_iov_md": false 00:21:28.543 }, 00:21:28.543 "driver_specific": { 00:21:28.543 "lvol": { 00:21:28.543 "lvol_store_uuid": "4f09dec7-2ace-490a-84ad-13a2ed981edf", 00:21:28.543 "base_bdev": "nvme0n1", 00:21:28.543 "thin_provision": true, 00:21:28.543 "num_allocated_clusters": 0, 00:21:28.543 "snapshot": false, 00:21:28.543 "clone": false, 00:21:28.543 "esnap_clone": false 00:21:28.543 } 00:21:28.543 } 00:21:28.543 } 00:21:28.543 ]' 00:21:28.543 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:28.543 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:28.543 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:28.802 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:28.802 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:28.802 18:28:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:28.802 18:28:40 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:28.802 18:28:40 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3e285bb3-eb22-4913-8cb3-b02efe56d2c9 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:29.112 [2024-07-22 18:28:40.855537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.112 [2024-07-22 18:28:40.855607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:29.112 [2024-07-22 18:28:40.855629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:29.112 [2024-07-22 18:28:40.855647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.112 [2024-07-22 18:28:40.859298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.112 [2024-07-22 18:28:40.859347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:29.112 [2024-07-22 18:28:40.859381] 
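The bdev_ftl_create call above assembles the device under test: the thin lvol (referenced by its UUID) is the base device, nvc0n1p0 the NV cache, core_mask 7 pins FTL to three cores, the L2P table gets a 60 MiB DRAM budget, and 10% of capacity is held back as overprovisioning. -t 240 raises the RPC timeout because first-time startup scrubs the cache, the multi-second step visible below. Re-wrapped for readability:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Create ftl0 on the lvol with nvc0n1p0 as write cache; 240 s RPC timeout.
    "$rpc" -t 240 bdev_ftl_create -b ftl0 \
        -d 3e285bb3-eb22-4913-8cb3-b02efe56d2c9 \
        -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10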
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.617 ms 00:21:29.112 [2024-07-22 18:28:40.859427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.112 [2024-07-22 18:28:40.859597] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:29.112 [2024-07-22 18:28:40.860631] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:29.112 [2024-07-22 18:28:40.860691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.112 [2024-07-22 18:28:40.860739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:29.112 [2024-07-22 18:28:40.860753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:21:29.112 [2024-07-22 18:28:40.860767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.112 [2024-07-22 18:28:40.861041] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 133f7ce8-c691-4f98-9963-df4bc40dc329 00:21:29.112 [2024-07-22 18:28:40.862938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.112 [2024-07-22 18:28:40.862980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:29.112 [2024-07-22 18:28:40.863000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:29.112 [2024-07-22 18:28:40.863013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.112 [2024-07-22 18:28:40.873115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.112 [2024-07-22 18:28:40.873165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:29.112 [2024-07-22 18:28:40.873186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.006 ms 00:21:29.112 [2024-07-22 18:28:40.873198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.112 [2024-07-22 18:28:40.873393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.112 [2024-07-22 18:28:40.873417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:29.112 [2024-07-22 18:28:40.873433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:29.112 [2024-07-22 18:28:40.873446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.112 [2024-07-22 18:28:40.873507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.112 [2024-07-22 18:28:40.873523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:29.112 [2024-07-22 18:28:40.873542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:29.112 [2024-07-22 18:28:40.873553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.112 [2024-07-22 18:28:40.873598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:29.112 [2024-07-22 18:28:40.879065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.112 [2024-07-22 18:28:40.879113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:29.112 [2024-07-22 18:28:40.879131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.478 ms 00:21:29.112 [2024-07-22 18:28:40.879145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.112 [2024-07-22 
18:28:40.879226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.112 [2024-07-22 18:28:40.879249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:29.112 [2024-07-22 18:28:40.879263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:29.112 [2024-07-22 18:28:40.879277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.112 [2024-07-22 18:28:40.879314] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:29.112 [2024-07-22 18:28:40.879489] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:29.113 [2024-07-22 18:28:40.879510] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:29.113 [2024-07-22 18:28:40.879533] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:29.113 [2024-07-22 18:28:40.879549] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:29.113 [2024-07-22 18:28:40.879565] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:29.113 [2024-07-22 18:28:40.879579] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:29.113 [2024-07-22 18:28:40.879598] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:29.113 [2024-07-22 18:28:40.879610] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:29.113 [2024-07-22 18:28:40.879649] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:29.113 [2024-07-22 18:28:40.879662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.113 [2024-07-22 18:28:40.879695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:29.113 [2024-07-22 18:28:40.879713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:21:29.113 [2024-07-22 18:28:40.879728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.113 [2024-07-22 18:28:40.879855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.113 [2024-07-22 18:28:40.879874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:29.113 [2024-07-22 18:28:40.879887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:29.113 [2024-07-22 18:28:40.879901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.113 [2024-07-22 18:28:40.880047] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:29.113 [2024-07-22 18:28:40.880077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:29.113 [2024-07-22 18:28:40.880090] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:29.113 [2024-07-22 18:28:40.880105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.113 [2024-07-22 18:28:40.880117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:29.113 [2024-07-22 18:28:40.880130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:29.113 [2024-07-22 18:28:40.880142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:29.113 [2024-07-22 18:28:40.880155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:21:29.113 [2024-07-22 18:28:40.880166] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:29.113 [2024-07-22 18:28:40.880179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:29.113 [2024-07-22 18:28:40.880190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:29.113 [2024-07-22 18:28:40.880203] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:29.113 [2024-07-22 18:28:40.880213] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:29.113 [2024-07-22 18:28:40.880228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:29.113 [2024-07-22 18:28:40.880240] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:29.113 [2024-07-22 18:28:40.880253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.113 [2024-07-22 18:28:40.880264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:29.113 [2024-07-22 18:28:40.880280] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:29.113 [2024-07-22 18:28:40.880291] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.113 [2024-07-22 18:28:40.880312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:29.113 [2024-07-22 18:28:40.880323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:29.113 [2024-07-22 18:28:40.880340] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:29.113 [2024-07-22 18:28:40.880361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:29.113 [2024-07-22 18:28:40.880383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:29.113 [2024-07-22 18:28:40.880394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:29.113 [2024-07-22 18:28:40.880408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:29.113 [2024-07-22 18:28:40.880419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:29.113 [2024-07-22 18:28:40.880432] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:29.113 [2024-07-22 18:28:40.880443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:29.113 [2024-07-22 18:28:40.880456] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:29.113 [2024-07-22 18:28:40.880467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:29.113 [2024-07-22 18:28:40.880480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:29.113 [2024-07-22 18:28:40.880491] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:29.113 [2024-07-22 18:28:40.880507] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:29.113 [2024-07-22 18:28:40.880519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:29.113 [2024-07-22 18:28:40.880532] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:29.113 [2024-07-22 18:28:40.880543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:29.113 [2024-07-22 18:28:40.880560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:29.113 [2024-07-22 18:28:40.880580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:29.113 [2024-07-22 18:28:40.880608] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.113 [2024-07-22 18:28:40.880622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:29.113 [2024-07-22 18:28:40.880636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:29.113 [2024-07-22 18:28:40.880647] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.113 [2024-07-22 18:28:40.880660] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:29.113 [2024-07-22 18:28:40.880673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:29.113 [2024-07-22 18:28:40.880705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:29.113 [2024-07-22 18:28:40.880726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.113 [2024-07-22 18:28:40.880741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:29.113 [2024-07-22 18:28:40.880753] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:29.113 [2024-07-22 18:28:40.880786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:29.113 [2024-07-22 18:28:40.880799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:29.113 [2024-07-22 18:28:40.880820] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:29.113 [2024-07-22 18:28:40.880841] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:29.113 [2024-07-22 18:28:40.880866] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:29.113 [2024-07-22 18:28:40.880885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:29.113 [2024-07-22 18:28:40.880902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:29.113 [2024-07-22 18:28:40.880914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:29.113 [2024-07-22 18:28:40.880929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:29.113 [2024-07-22 18:28:40.880941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:29.113 [2024-07-22 18:28:40.880955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:29.113 [2024-07-22 18:28:40.880967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:29.113 [2024-07-22 18:28:40.880981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:29.113 [2024-07-22 18:28:40.880994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:29.113 [2024-07-22 18:28:40.881009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:29.113 [2024-07-22 18:28:40.881022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:29.113 [2024-07-22 18:28:40.881042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:29.113 [2024-07-22 18:28:40.881063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:29.113 [2024-07-22 18:28:40.881089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:29.113 [2024-07-22 18:28:40.881104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:29.113 [2024-07-22 18:28:40.881118] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:29.113 [2024-07-22 18:28:40.881131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:29.113 [2024-07-22 18:28:40.881147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:29.113 [2024-07-22 18:28:40.881160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:29.113 [2024-07-22 18:28:40.881174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:29.113 [2024-07-22 18:28:40.881186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:29.113 [2024-07-22 18:28:40.881202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.113 [2024-07-22 18:28:40.881214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:29.113 [2024-07-22 18:28:40.881231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.223 ms 00:21:29.114 [2024-07-22 18:28:40.881243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.114 [2024-07-22 18:28:40.881362] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
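The layout dump above is internally consistent: 23592960 L2P entries at the stated 4-byte address size is exactly the 90.00 MiB "Region l2p", and the same region appears in the SB metadata table as type 0x2 with blk_sz 0x5a00 blocks of 4096 bytes. Because only 60 MiB of DRAM was granted at create time, the driver later caps the resident L2P at 59 of 60 MiB (see the ftl_l2p_cache notice further down). Checking the arithmetic:

    # L2P footprint: entries x 4 bytes per entry, expressed in MiB.
    echo $(( 23592960 * 4 / 1024 / 1024 ))    # -> 90, matching "Region l2p"
    # Same region from the SB metadata table: 0x5a00 blocks of 4096 bytes.
    echo $(( 0x5a00 * 4096 / 1024 / 1024 ))   # -> 90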
00:21:29.114 [2024-07-22 18:28:40.881383] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:31.642 [2024-07-22 18:28:43.346642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.642 [2024-07-22 18:28:43.346735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:31.642 [2024-07-22 18:28:43.346763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2465.272 ms 00:21:31.642 [2024-07-22 18:28:43.346777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.642 [2024-07-22 18:28:43.386947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.642 [2024-07-22 18:28:43.387027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:31.642 [2024-07-22 18:28:43.387053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.739 ms 00:21:31.642 [2024-07-22 18:28:43.387068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.642 [2024-07-22 18:28:43.387269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.642 [2024-07-22 18:28:43.387290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:31.642 [2024-07-22 18:28:43.387307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:31.642 [2024-07-22 18:28:43.387323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.642 [2024-07-22 18:28:43.443597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.642 [2024-07-22 18:28:43.443694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:31.642 [2024-07-22 18:28:43.443727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.223 ms 00:21:31.642 [2024-07-22 18:28:43.443745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.642 [2024-07-22 18:28:43.443920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.642 [2024-07-22 18:28:43.443946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:31.642 [2024-07-22 18:28:43.443968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:31.642 [2024-07-22 18:28:43.443984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.643 [2024-07-22 18:28:43.444668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.643 [2024-07-22 18:28:43.444746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:31.643 [2024-07-22 18:28:43.444774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:21:31.643 [2024-07-22 18:28:43.444792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.643 [2024-07-22 18:28:43.445024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.643 [2024-07-22 18:28:43.445051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:31.643 [2024-07-22 18:28:43.445073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:21:31.643 [2024-07-22 18:28:43.445090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.643 [2024-07-22 18:28:43.468680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.643 [2024-07-22 18:28:43.468757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:31.643 [2024-07-22 
18:28:43.468799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.530 ms 00:21:31.643 [2024-07-22 18:28:43.468813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.643 [2024-07-22 18:28:43.483536] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:31.643 [2024-07-22 18:28:43.505559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.643 [2024-07-22 18:28:43.505662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:31.643 [2024-07-22 18:28:43.505685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.570 ms 00:21:31.643 [2024-07-22 18:28:43.505717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.643 [2024-07-22 18:28:43.575044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.643 [2024-07-22 18:28:43.575127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:31.643 [2024-07-22 18:28:43.575150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.183 ms 00:21:31.643 [2024-07-22 18:28:43.575166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.643 [2024-07-22 18:28:43.575502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.643 [2024-07-22 18:28:43.575536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:31.643 [2024-07-22 18:28:43.575553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:21:31.643 [2024-07-22 18:28:43.575572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.643 [2024-07-22 18:28:43.606494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.643 [2024-07-22 18:28:43.606558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:31.643 [2024-07-22 18:28:43.606579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.876 ms 00:21:31.643 [2024-07-22 18:28:43.606595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.643 [2024-07-22 18:28:43.636728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.643 [2024-07-22 18:28:43.636801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:31.643 [2024-07-22 18:28:43.636823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.999 ms 00:21:31.643 [2024-07-22 18:28:43.636837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.643 [2024-07-22 18:28:43.637809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.643 [2024-07-22 18:28:43.637852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:31.643 [2024-07-22 18:28:43.637870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:21:31.643 [2024-07-22 18:28:43.637885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.901 [2024-07-22 18:28:43.725809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.901 [2024-07-22 18:28:43.725885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:31.901 [2024-07-22 18:28:43.725908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.875 ms 00:21:31.901 [2024-07-22 18:28:43.725928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.901 [2024-07-22 
18:28:43.758223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.901 [2024-07-22 18:28:43.758276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:31.901 [2024-07-22 18:28:43.758297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.188 ms 00:21:31.901 [2024-07-22 18:28:43.758317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.901 [2024-07-22 18:28:43.788805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.901 [2024-07-22 18:28:43.788857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:31.901 [2024-07-22 18:28:43.788876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.386 ms 00:21:31.901 [2024-07-22 18:28:43.788890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.901 [2024-07-22 18:28:43.819769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.901 [2024-07-22 18:28:43.819819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:31.901 [2024-07-22 18:28:43.819837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.778 ms 00:21:31.901 [2024-07-22 18:28:43.819853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.901 [2024-07-22 18:28:43.819965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.901 [2024-07-22 18:28:43.819992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:31.901 [2024-07-22 18:28:43.820007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:31.901 [2024-07-22 18:28:43.820026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.901 [2024-07-22 18:28:43.820126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.901 [2024-07-22 18:28:43.820148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:31.901 [2024-07-22 18:28:43.820161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:31.901 [2024-07-22 18:28:43.820200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.901 [2024-07-22 18:28:43.821428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:31.901 [2024-07-22 18:28:43.825513] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2965.497 ms, result 0 00:21:31.901 [2024-07-22 18:28:43.826463] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:31.901 { 00:21:31.901 "name": "ftl0", 00:21:31.901 "uuid": "133f7ce8-c691-4f98-9963-df4bc40dc329" 00:21:31.901 } 00:21:31.901 18:28:43 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:31.901 18:28:43 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:21:31.901 18:28:43 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:31.901 18:28:43 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:21:31.901 18:28:43 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:31.901 18:28:43 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:31.901 18:28:43 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:32.159 18:28:44 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
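waitforbdev, traced above, is autotest_common.sh's readiness gate: flush any pending examine callbacks, then query the bdev by name with a 2000 ms timeout. Reduced to its two RPCs:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Wait for bdev examination to finish, then confirm ftl0 is visible.
    "$rpc" bdev_wait_for_examine
    "$rpc" bdev_get_bdevs -b ftl0 -t 2000 >/dev/null && echo 'ftl0 ready'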
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:32.418 [ 00:21:32.418 { 00:21:32.418 "name": "ftl0", 00:21:32.418 "aliases": [ 00:21:32.418 "133f7ce8-c691-4f98-9963-df4bc40dc329" 00:21:32.418 ], 00:21:32.418 "product_name": "FTL disk", 00:21:32.418 "block_size": 4096, 00:21:32.418 "num_blocks": 23592960, 00:21:32.418 "uuid": "133f7ce8-c691-4f98-9963-df4bc40dc329", 00:21:32.418 "assigned_rate_limits": { 00:21:32.418 "rw_ios_per_sec": 0, 00:21:32.418 "rw_mbytes_per_sec": 0, 00:21:32.418 "r_mbytes_per_sec": 0, 00:21:32.418 "w_mbytes_per_sec": 0 00:21:32.418 }, 00:21:32.418 "claimed": false, 00:21:32.418 "zoned": false, 00:21:32.418 "supported_io_types": { 00:21:32.418 "read": true, 00:21:32.418 "write": true, 00:21:32.418 "unmap": true, 00:21:32.418 "flush": true, 00:21:32.418 "reset": false, 00:21:32.418 "nvme_admin": false, 00:21:32.418 "nvme_io": false, 00:21:32.418 "nvme_io_md": false, 00:21:32.418 "write_zeroes": true, 00:21:32.418 "zcopy": false, 00:21:32.418 "get_zone_info": false, 00:21:32.418 "zone_management": false, 00:21:32.418 "zone_append": false, 00:21:32.418 "compare": false, 00:21:32.418 "compare_and_write": false, 00:21:32.418 "abort": false, 00:21:32.418 "seek_hole": false, 00:21:32.418 "seek_data": false, 00:21:32.418 "copy": false, 00:21:32.418 "nvme_iov_md": false 00:21:32.418 }, 00:21:32.418 "driver_specific": { 00:21:32.418 "ftl": { 00:21:32.418 "base_bdev": "3e285bb3-eb22-4913-8cb3-b02efe56d2c9", 00:21:32.418 "cache": "nvc0n1p0" 00:21:32.418 } 00:21:32.418 } 00:21:32.418 } 00:21:32.418 ] 00:21:32.418 18:28:44 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:21:32.418 18:28:44 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:32.418 18:28:44 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:32.676 18:28:44 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:32.676 18:28:44 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:32.934 18:28:44 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:32.934 { 00:21:32.934 "name": "ftl0", 00:21:32.934 "aliases": [ 00:21:32.934 "133f7ce8-c691-4f98-9963-df4bc40dc329" 00:21:32.934 ], 00:21:32.934 "product_name": "FTL disk", 00:21:32.935 "block_size": 4096, 00:21:32.935 "num_blocks": 23592960, 00:21:32.935 "uuid": "133f7ce8-c691-4f98-9963-df4bc40dc329", 00:21:32.935 "assigned_rate_limits": { 00:21:32.935 "rw_ios_per_sec": 0, 00:21:32.935 "rw_mbytes_per_sec": 0, 00:21:32.935 "r_mbytes_per_sec": 0, 00:21:32.935 "w_mbytes_per_sec": 0 00:21:32.935 }, 00:21:32.935 "claimed": false, 00:21:32.935 "zoned": false, 00:21:32.935 "supported_io_types": { 00:21:32.935 "read": true, 00:21:32.935 "write": true, 00:21:32.935 "unmap": true, 00:21:32.935 "flush": true, 00:21:32.935 "reset": false, 00:21:32.935 "nvme_admin": false, 00:21:32.935 "nvme_io": false, 00:21:32.935 "nvme_io_md": false, 00:21:32.935 "write_zeroes": true, 00:21:32.935 "zcopy": false, 00:21:32.935 "get_zone_info": false, 00:21:32.935 "zone_management": false, 00:21:32.935 "zone_append": false, 00:21:32.935 "compare": false, 00:21:32.935 "compare_and_write": false, 00:21:32.935 "abort": false, 00:21:32.935 "seek_hole": false, 00:21:32.935 "seek_data": false, 00:21:32.935 "copy": false, 00:21:32.935 "nvme_iov_md": false 00:21:32.935 }, 00:21:32.935 "driver_specific": { 00:21:32.935 "ftl": { 00:21:32.935 "base_bdev": "3e285bb3-eb22-4913-8cb3-b02efe56d2c9", 00:21:32.935 "cache": "nvc0n1p0" 
00:21:32.935 } 00:21:32.935 } 00:21:32.935 } 00:21:32.935 ]' 00:21:32.935 18:28:44 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:32.935 18:28:44 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:32.935 18:28:44 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:33.193 [2024-07-22 18:28:45.105786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.193 [2024-07-22 18:28:45.105850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:33.193 [2024-07-22 18:28:45.105881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:33.193 [2024-07-22 18:28:45.105895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.194 [2024-07-22 18:28:45.105948] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:33.194 [2024-07-22 18:28:45.109610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.194 [2024-07-22 18:28:45.109656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:33.194 [2024-07-22 18:28:45.109673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.637 ms 00:21:33.194 [2024-07-22 18:28:45.109708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.194 [2024-07-22 18:28:45.110300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.194 [2024-07-22 18:28:45.110344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:33.194 [2024-07-22 18:28:45.110362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:21:33.194 [2024-07-22 18:28:45.110376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.194 [2024-07-22 18:28:45.114009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.194 [2024-07-22 18:28:45.114051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:33.194 [2024-07-22 18:28:45.114067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.594 ms 00:21:33.194 [2024-07-22 18:28:45.114081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.194 [2024-07-22 18:28:45.121526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.194 [2024-07-22 18:28:45.121569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:33.194 [2024-07-22 18:28:45.121603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.387 ms 00:21:33.194 [2024-07-22 18:28:45.121618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.194 [2024-07-22 18:28:45.153410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.194 [2024-07-22 18:28:45.153463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:33.194 [2024-07-22 18:28:45.153483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.663 ms 00:21:33.194 [2024-07-22 18:28:45.153501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.194 [2024-07-22 18:28:45.172215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.194 [2024-07-22 18:28:45.172266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:33.194 [2024-07-22 18:28:45.172289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.616 ms 00:21:33.194 
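Having snapshotted the bdev subsystem configuration (the '{"subsystems": [' / ']}' wrapper around save_subsystem_config above) and read back num_blocks = 23592960, matching one L2P entry per addressable block, trim.sh unloads the device; bdev_ftl_unload starts the orderly shutdown traced below, persisting L2P, NV cache and band metadata. A sketch, with $ftl_json as an assumed output path:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ftl_json=/tmp/ftl.json    # assumed path for the config snapshot
    {
        echo '{"subsystems": ['
        "$rpc" save_subsystem_config -n bdev
        echo ']}'
    } > "$ftl_json"
    "$rpc" bdev_ftl_unload -b ftl0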
[2024-07-22 18:28:45.172305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.194 [2024-07-22 18:28:45.172556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.194 [2024-07-22 18:28:45.172583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:33.194 [2024-07-22 18:28:45.172597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:21:33.194 [2024-07-22 18:28:45.172612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.194 [2024-07-22 18:28:45.203521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.194 [2024-07-22 18:28:45.203571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:33.194 [2024-07-22 18:28:45.203590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.872 ms 00:21:33.194 [2024-07-22 18:28:45.203604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.454 [2024-07-22 18:28:45.234126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.454 [2024-07-22 18:28:45.234177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:33.454 [2024-07-22 18:28:45.234195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.404 ms 00:21:33.454 [2024-07-22 18:28:45.234212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.454 [2024-07-22 18:28:45.264437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.454 [2024-07-22 18:28:45.264503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:33.454 [2024-07-22 18:28:45.264538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.125 ms 00:21:33.454 [2024-07-22 18:28:45.264552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.454 [2024-07-22 18:28:45.296365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.454 [2024-07-22 18:28:45.296416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:33.454 [2024-07-22 18:28:45.296434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.637 ms 00:21:33.454 [2024-07-22 18:28:45.296449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.454 [2024-07-22 18:28:45.296550] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:33.454 [2024-07-22 18:28:45.296583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.296599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.296614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.296627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.296643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.296656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.296675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.296712] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Bands 8-82: 0 / 261120 wr_cnt: 0 state: free (all identical)
00:21:33.454 [2024-07-22 18:28:45.297849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.297862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.297878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.297890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.297916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.297929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.297944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.297956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:33.454 [2024-07-22 18:28:45.297971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:33.455 [2024-07-22 18:28:45.297984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:33.455 [2024-07-22 18:28:45.297999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:33.455 [2024-07-22 18:28:45.298012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:33.455 [2024-07-22 18:28:45.298027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:33.455 [2024-07-22 18:28:45.298040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:33.455 [2024-07-22 18:28:45.298055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:33.455 [2024-07-22 18:28:45.298068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:33.455 [2024-07-22 18:28:45.298083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:33.455 [2024-07-22 18:28:45.298096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:33.455 [2024-07-22 18:28:45.298127] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:33.455 [2024-07-22 18:28:45.298141] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 133f7ce8-c691-4f98-9963-df4bc40dc329 00:21:33.455 [2024-07-22 18:28:45.298159] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:33.455 [2024-07-22 18:28:45.298171] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:33.455 [2024-07-22 18:28:45.298189] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:33.455 [2024-07-22 18:28:45.298201] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:33.455 [2024-07-22 18:28:45.298215] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:33.455 [2024-07-22 18:28:45.298228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:33.455 [2024-07-22 18:28:45.298242] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:33.455 [2024-07-22 18:28:45.298253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:33.455 [2024-07-22 18:28:45.298267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:33.455 [2024-07-22 18:28:45.298279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.455 [2024-07-22 18:28:45.298294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:33.455 [2024-07-22 18:28:45.298308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:21:33.455 [2024-07-22 18:28:45.298323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.455 [2024-07-22 18:28:45.315627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.455 [2024-07-22 18:28:45.315675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:33.455 [2024-07-22 18:28:45.315712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.261 ms 00:21:33.455 [2024-07-22 18:28:45.315732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.455 [2024-07-22 18:28:45.316283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.455 [2024-07-22 18:28:45.316329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:33.455 [2024-07-22 18:28:45.316346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:21:33.455 [2024-07-22 18:28:45.316361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.455 [2024-07-22 18:28:45.376340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.455 [2024-07-22 18:28:45.376412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:33.455 [2024-07-22 18:28:45.376431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.455 [2024-07-22 18:28:45.376446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.455 [2024-07-22 18:28:45.376596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.455 [2024-07-22 18:28:45.376621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:33.455 [2024-07-22 18:28:45.376635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.455 [2024-07-22 18:28:45.376650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.455 [2024-07-22 18:28:45.376763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.455 [2024-07-22 18:28:45.376805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:33.455 [2024-07-22 18:28:45.376819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.455 [2024-07-22 18:28:45.376837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.455 [2024-07-22 18:28:45.376880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.455 [2024-07-22 18:28:45.376899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:33.455 [2024-07-22 18:28:45.376911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.455 [2024-07-22 18:28:45.376926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.714 [2024-07-22 18:28:45.488116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:21:33.714 [2024-07-22 18:28:45.488192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:33.714 [2024-07-22 18:28:45.488213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.714 [2024-07-22 18:28:45.488228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.714 [2024-07-22 18:28:45.574984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.714 [2024-07-22 18:28:45.575058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:33.714 [2024-07-22 18:28:45.575079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.714 [2024-07-22 18:28:45.575095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.714 [2024-07-22 18:28:45.575225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.714 [2024-07-22 18:28:45.575251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:33.714 [2024-07-22 18:28:45.575269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.714 [2024-07-22 18:28:45.575287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.714 [2024-07-22 18:28:45.575351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.714 [2024-07-22 18:28:45.575370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:33.714 [2024-07-22 18:28:45.575382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.714 [2024-07-22 18:28:45.575409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.714 [2024-07-22 18:28:45.575555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.714 [2024-07-22 18:28:45.575581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:33.714 [2024-07-22 18:28:45.575614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.714 [2024-07-22 18:28:45.575635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.714 [2024-07-22 18:28:45.575729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.714 [2024-07-22 18:28:45.575768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:33.714 [2024-07-22 18:28:45.575782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.714 [2024-07-22 18:28:45.575797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.714 [2024-07-22 18:28:45.575859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.714 [2024-07-22 18:28:45.575879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:33.714 [2024-07-22 18:28:45.575892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.714 [2024-07-22 18:28:45.575914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.714 [2024-07-22 18:28:45.575985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.714 [2024-07-22 18:28:45.576006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:33.714 [2024-07-22 18:28:45.576020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.714 [2024-07-22 18:28:45.576033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.714 [2024-07-22 
18:28:45.576295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 470.491 ms, result 0 00:21:33.714 true 00:21:33.714 18:28:45 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 80662 00:21:33.714 18:28:45 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80662 ']' 00:21:33.714 18:28:45 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80662 00:21:33.714 18:28:45 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:21:33.714 18:28:45 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:33.714 18:28:45 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80662 00:21:33.714 18:28:45 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:33.714 18:28:45 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:33.714 18:28:45 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80662' 00:21:33.714 killing process with pid 80662 00:21:33.714 18:28:45 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 80662 00:21:33.714 18:28:45 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 80662 00:21:38.985 18:28:50 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:39.553 65536+0 records in 00:21:39.553 65536+0 records out 00:21:39.553 268435456 bytes (268 MB, 256 MiB) copied, 1.24963 s, 215 MB/s 00:21:39.553 18:28:51 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:39.811 [2024-07-22 18:28:51.668319] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:39.811 [2024-07-22 18:28:51.668717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80858 ] 00:21:40.070 [2024-07-22 18:28:51.857526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.329 [2024-07-22 18:28:52.113095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.588 [2024-07-22 18:28:52.463934] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:40.588 [2024-07-22 18:28:52.464031] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:40.848 [2024-07-22 18:28:52.629408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.848 [2024-07-22 18:28:52.629479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:40.848 [2024-07-22 18:28:52.629517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:40.848 [2024-07-22 18:28:52.629530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.848 [2024-07-22 18:28:52.633026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.848 [2024-07-22 18:28:52.633067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:40.848 [2024-07-22 18:28:52.633101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.467 ms 00:21:40.848 [2024-07-22 18:28:52.633113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.848 [2024-07-22 18:28:52.633271] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:40.848 [2024-07-22 18:28:52.634299] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:40.848 [2024-07-22 18:28:52.634340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.848 [2024-07-22 18:28:52.634356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:40.848 [2024-07-22 18:28:52.634369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:21:40.848 [2024-07-22 18:28:52.634380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.848 [2024-07-22 18:28:52.636449] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:40.848 [2024-07-22 18:28:52.653140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.848 [2024-07-22 18:28:52.653184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:40.848 [2024-07-22 18:28:52.653224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.693 ms 00:21:40.848 [2024-07-22 18:28:52.653237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.848 [2024-07-22 18:28:52.653359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.848 [2024-07-22 18:28:52.653380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:40.848 [2024-07-22 18:28:52.653393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:40.848 [2024-07-22 18:28:52.653405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.848 [2024-07-22 18:28:52.662320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:40.848 [2024-07-22 18:28:52.662366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:40.848 [2024-07-22 18:28:52.662399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.855 ms 00:21:40.848 [2024-07-22 18:28:52.662410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.848 [2024-07-22 18:28:52.662535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.848 [2024-07-22 18:28:52.662556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:40.848 [2024-07-22 18:28:52.662569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:40.848 [2024-07-22 18:28:52.662580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.848 [2024-07-22 18:28:52.662629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.848 [2024-07-22 18:28:52.662645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:40.848 [2024-07-22 18:28:52.662662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:40.848 [2024-07-22 18:28:52.662673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.848 [2024-07-22 18:28:52.662744] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:40.848 [2024-07-22 18:28:52.667848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.848 [2024-07-22 18:28:52.667884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:40.848 [2024-07-22 18:28:52.667915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.129 ms 00:21:40.848 [2024-07-22 18:28:52.667947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.848 [2024-07-22 18:28:52.668035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.848 [2024-07-22 18:28:52.668053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:40.848 [2024-07-22 18:28:52.668066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:40.848 [2024-07-22 18:28:52.668077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.848 [2024-07-22 18:28:52.668126] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:40.848 [2024-07-22 18:28:52.668167] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:40.848 [2024-07-22 18:28:52.668216] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:40.848 [2024-07-22 18:28:52.668238] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:40.848 [2024-07-22 18:28:52.668343] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:40.848 [2024-07-22 18:28:52.668360] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:40.848 [2024-07-22 18:28:52.668376] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:40.848 [2024-07-22 18:28:52.668391] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:40.848 [2024-07-22 18:28:52.668405] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:40.848 [2024-07-22 18:28:52.668418] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:40.848 [2024-07-22 18:28:52.668434] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:40.848 [2024-07-22 18:28:52.668445] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:40.848 [2024-07-22 18:28:52.668457] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:40.848 [2024-07-22 18:28:52.668470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.848 [2024-07-22 18:28:52.668481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:40.848 [2024-07-22 18:28:52.668494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:21:40.848 [2024-07-22 18:28:52.668505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.848 [2024-07-22 18:28:52.668600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.848 [2024-07-22 18:28:52.668615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:40.848 [2024-07-22 18:28:52.668627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:40.848 [2024-07-22 18:28:52.668644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.848 [2024-07-22 18:28:52.668775] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:40.848 [2024-07-22 18:28:52.668796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:40.848 [2024-07-22 18:28:52.668809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:40.848 [2024-07-22 18:28:52.668821] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.848 [2024-07-22 18:28:52.668833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:40.848 [2024-07-22 18:28:52.668847] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:40.848 [2024-07-22 18:28:52.668857] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:40.848 [2024-07-22 18:28:52.668870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:40.848 [2024-07-22 18:28:52.668880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:40.848 [2024-07-22 18:28:52.668891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:40.848 [2024-07-22 18:28:52.668902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:40.848 [2024-07-22 18:28:52.668913] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:40.848 [2024-07-22 18:28:52.668923] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:40.848 [2024-07-22 18:28:52.668937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:40.848 [2024-07-22 18:28:52.668948] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:40.848 [2024-07-22 18:28:52.668959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.848 [2024-07-22 18:28:52.668971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:40.848 [2024-07-22 18:28:52.668983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:40.848 [2024-07-22 18:28:52.669008] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.848 [2024-07-22 18:28:52.669020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:40.849 [2024-07-22 18:28:52.669031] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:40.849 [2024-07-22 18:28:52.669042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:40.849 [2024-07-22 18:28:52.669052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:40.849 [2024-07-22 18:28:52.669063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:40.849 [2024-07-22 18:28:52.669074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:40.849 [2024-07-22 18:28:52.669085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:40.849 [2024-07-22 18:28:52.669096] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:40.849 [2024-07-22 18:28:52.669106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:40.849 [2024-07-22 18:28:52.669117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:40.849 [2024-07-22 18:28:52.669127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:40.849 [2024-07-22 18:28:52.669138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:40.849 [2024-07-22 18:28:52.669149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:40.849 [2024-07-22 18:28:52.669159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:40.849 [2024-07-22 18:28:52.669171] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:40.849 [2024-07-22 18:28:52.669181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:40.849 [2024-07-22 18:28:52.669192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:40.849 [2024-07-22 18:28:52.669202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:40.849 [2024-07-22 18:28:52.669213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:40.849 [2024-07-22 18:28:52.669224] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:40.849 [2024-07-22 18:28:52.669235] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.849 [2024-07-22 18:28:52.669245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:40.849 [2024-07-22 18:28:52.669256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:40.849 [2024-07-22 18:28:52.669266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.849 [2024-07-22 18:28:52.669277] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:40.849 [2024-07-22 18:28:52.669297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:40.849 [2024-07-22 18:28:52.669316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:40.849 [2024-07-22 18:28:52.669327] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.849 [2024-07-22 18:28:52.669339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:40.849 [2024-07-22 18:28:52.669352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:40.849 [2024-07-22 18:28:52.669363] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:40.849 
[2024-07-22 18:28:52.669374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:40.849 [2024-07-22 18:28:52.669385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:40.849 [2024-07-22 18:28:52.669397] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:40.849 [2024-07-22 18:28:52.669409] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:40.849 [2024-07-22 18:28:52.669429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:40.849 [2024-07-22 18:28:52.669442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:40.849 [2024-07-22 18:28:52.669454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:40.849 [2024-07-22 18:28:52.669466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:40.849 [2024-07-22 18:28:52.669478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:40.849 [2024-07-22 18:28:52.669489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:40.849 [2024-07-22 18:28:52.669500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:40.849 [2024-07-22 18:28:52.669512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:40.849 [2024-07-22 18:28:52.669523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:40.849 [2024-07-22 18:28:52.669535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:40.849 [2024-07-22 18:28:52.669547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:40.849 [2024-07-22 18:28:52.669559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:40.849 [2024-07-22 18:28:52.669570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:40.849 [2024-07-22 18:28:52.669582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:40.849 [2024-07-22 18:28:52.669596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:40.849 [2024-07-22 18:28:52.669608] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:40.849 [2024-07-22 18:28:52.669623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:40.849 [2024-07-22 18:28:52.669635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:40.849 [2024-07-22 18:28:52.669648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:40.849 [2024-07-22 18:28:52.669660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:40.849 [2024-07-22 18:28:52.669672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:40.849 [2024-07-22 18:28:52.669698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-07-22 18:28:52.669711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:40.849 [2024-07-22 18:28:52.669724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:21:40.849 [2024-07-22 18:28:52.669735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-07-22 18:28:52.715140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-07-22 18:28:52.715212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:40.849 [2024-07-22 18:28:52.715233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.323 ms 00:21:40.849 [2024-07-22 18:28:52.715252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-07-22 18:28:52.715503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-07-22 18:28:52.715525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:40.849 [2024-07-22 18:28:52.715545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:40.849 [2024-07-22 18:28:52.715558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-07-22 18:28:52.759497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-07-22 18:28:52.759563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:40.849 [2024-07-22 18:28:52.759584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.901 ms 00:21:40.849 [2024-07-22 18:28:52.759597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-07-22 18:28:52.759771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-07-22 18:28:52.759791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:40.849 [2024-07-22 18:28:52.759805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:40.849 [2024-07-22 18:28:52.759818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-07-22 18:28:52.760388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-07-22 18:28:52.760431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:40.849 [2024-07-22 18:28:52.760450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:21:40.849 [2024-07-22 18:28:52.760462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-07-22 18:28:52.760636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-07-22 18:28:52.760664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:40.849 [2024-07-22 18:28:52.760695] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:21:40.849 [2024-07-22 18:28:52.760710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-07-22 18:28:52.779992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-07-22 18:28:52.780055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:40.849 [2024-07-22 18:28:52.780075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.246 ms 00:21:40.849 [2024-07-22 18:28:52.780087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-07-22 18:28:52.797404] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:40.849 [2024-07-22 18:28:52.797462] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:40.849 [2024-07-22 18:28:52.797483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-07-22 18:28:52.797496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:40.849 [2024-07-22 18:28:52.797509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.179 ms 00:21:40.849 [2024-07-22 18:28:52.797521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.849 [2024-07-22 18:28:52.828596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.849 [2024-07-22 18:28:52.828716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:40.849 [2024-07-22 18:28:52.828740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.964 ms 00:21:40.850 [2024-07-22 18:28:52.828753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.850 [2024-07-22 18:28:52.846523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.850 [2024-07-22 18:28:52.846590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:40.850 [2024-07-22 18:28:52.846626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.570 ms 00:21:40.850 [2024-07-22 18:28:52.846638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.109 [2024-07-22 18:28:52.862148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.109 [2024-07-22 18:28:52.862190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:41.109 [2024-07-22 18:28:52.862207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.356 ms 00:21:41.109 [2024-07-22 18:28:52.862219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.109 [2024-07-22 18:28:52.863189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.109 [2024-07-22 18:28:52.863223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:41.109 [2024-07-22 18:28:52.863245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:21:41.109 [2024-07-22 18:28:52.863258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.109 [2024-07-22 18:28:52.940503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.109 [2024-07-22 18:28:52.940601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:41.109 [2024-07-22 18:28:52.940633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.200 ms 00:21:41.109 [2024-07-22 18:28:52.940645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.109 [2024-07-22 18:28:52.957137] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:41.109 [2024-07-22 18:28:52.979237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.109 [2024-07-22 18:28:52.979312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:41.109 [2024-07-22 18:28:52.979350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.407 ms 00:21:41.109 [2024-07-22 18:28:52.979363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.109 [2024-07-22 18:28:52.979527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.109 [2024-07-22 18:28:52.979548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:41.109 [2024-07-22 18:28:52.979562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:41.109 [2024-07-22 18:28:52.979579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.109 [2024-07-22 18:28:52.979656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.109 [2024-07-22 18:28:52.979673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:41.109 [2024-07-22 18:28:52.979708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:41.109 [2024-07-22 18:28:52.979721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.109 [2024-07-22 18:28:52.979759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.109 [2024-07-22 18:28:52.979775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:41.109 [2024-07-22 18:28:52.979789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:41.109 [2024-07-22 18:28:52.979801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.109 [2024-07-22 18:28:52.979862] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:41.109 [2024-07-22 18:28:52.979879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.109 [2024-07-22 18:28:52.979890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:41.109 [2024-07-22 18:28:52.979903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:41.109 [2024-07-22 18:28:52.979915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.109 [2024-07-22 18:28:53.011352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.109 [2024-07-22 18:28:53.011417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:41.109 [2024-07-22 18:28:53.011436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.408 ms 00:21:41.109 [2024-07-22 18:28:53.011461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.109 [2024-07-22 18:28:53.011590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.109 [2024-07-22 18:28:53.011611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:41.109 [2024-07-22 18:28:53.011625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:41.109 [2024-07-22 18:28:53.011637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:41.109 [2024-07-22 18:28:53.012977] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:41.109 [2024-07-22 18:28:53.016965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.207 ms, result 0 00:21:41.109 [2024-07-22 18:28:53.017851] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:41.109 [2024-07-22 18:28:53.033826] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:52.386  Copying: 24/256 [MB] (24 MBps) Copying: 49/256 [MB] (24 MBps) Copying: 71/256 [MB] (21 MBps) Copying: 92/256 [MB] (21 MBps) Copying: 113/256 [MB] (20 MBps) Copying: 134/256 [MB] (21 MBps) Copying: 156/256 [MB] (21 MBps) Copying: 178/256 [MB] (21 MBps) Copying: 201/256 [MB] (22 MBps) Copying: 226/256 [MB] (25 MBps) Copying: 252/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-07-22 18:29:04.178621] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:52.386 [2024-07-22 18:29:04.191051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.386 [2024-07-22 18:29:04.191095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:52.386 [2024-07-22 18:29:04.191116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:52.386 [2024-07-22 18:29:04.191129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.386 [2024-07-22 18:29:04.191161] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:52.386 [2024-07-22 18:29:04.194728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.386 [2024-07-22 18:29:04.194759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:52.386 [2024-07-22 18:29:04.194782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.545 ms 00:21:52.386 [2024-07-22 18:29:04.194794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.386 [2024-07-22 18:29:04.196674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.386 [2024-07-22 18:29:04.196725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:52.386 [2024-07-22 18:29:04.196742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.848 ms 00:21:52.386 [2024-07-22 18:29:04.196754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.386 [2024-07-22 18:29:04.203908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.386 [2024-07-22 18:29:04.203947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:52.386 [2024-07-22 18:29:04.203963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.128 ms 00:21:52.386 [2024-07-22 18:29:04.203974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.386 [2024-07-22 18:29:04.211267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.386 [2024-07-22 18:29:04.211302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:52.386 [2024-07-22 18:29:04.211318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.220 ms 00:21:52.386 [2024-07-22 18:29:04.211329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:21:52.386 [2024-07-22 18:29:04.241470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.386 [2024-07-22 18:29:04.241516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:52.386 [2024-07-22 18:29:04.241535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.084 ms 00:21:52.386 [2024-07-22 18:29:04.241547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.386 [2024-07-22 18:29:04.259123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.386 [2024-07-22 18:29:04.259166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:52.387 [2024-07-22 18:29:04.259183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.504 ms 00:21:52.387 [2024-07-22 18:29:04.259195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.387 [2024-07-22 18:29:04.259354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.387 [2024-07-22 18:29:04.259377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:52.387 [2024-07-22 18:29:04.259403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:21:52.387 [2024-07-22 18:29:04.259416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.387 [2024-07-22 18:29:04.289710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.387 [2024-07-22 18:29:04.289753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:52.387 [2024-07-22 18:29:04.289771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.268 ms 00:21:52.387 [2024-07-22 18:29:04.289782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.387 [2024-07-22 18:29:04.319662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.387 [2024-07-22 18:29:04.319712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:52.387 [2024-07-22 18:29:04.319730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.812 ms 00:21:52.387 [2024-07-22 18:29:04.319741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.387 [2024-07-22 18:29:04.349421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.387 [2024-07-22 18:29:04.349461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:52.387 [2024-07-22 18:29:04.349493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.611 ms 00:21:52.387 [2024-07-22 18:29:04.349504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.387 [2024-07-22 18:29:04.379013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.387 [2024-07-22 18:29:04.379070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:52.387 [2024-07-22 18:29:04.379089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.407 ms 00:21:52.387 [2024-07-22 18:29:04.379101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.387 [2024-07-22 18:29:04.379168] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:52.387 [2024-07-22 18:29:04.379194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379958] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.379995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:52.387 [2024-07-22 18:29:04.380184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380272] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:52.388 [2024-07-22 18:29:04.380586] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:52.388 [2024-07-22 18:29:04.380605] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 133f7ce8-c691-4f98-9963-df4bc40dc329 00:21:52.388 [2024-07-22 18:29:04.380618] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:52.388 [2024-07-22 18:29:04.380630] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:52.388 [2024-07-22 18:29:04.380641] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:52.388 [2024-07-22 18:29:04.380665] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:52.388 [2024-07-22 18:29:04.380687] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:52.388 [2024-07-22 18:29:04.380702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:52.388 [2024-07-22 18:29:04.380714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:52.388 [2024-07-22 18:29:04.380724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:52.388 [2024-07-22 18:29:04.380734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:52.388 [2024-07-22 18:29:04.380746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.388 [2024-07-22 18:29:04.380758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:52.388 [2024-07-22 18:29:04.380771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:21:52.388 [2024-07-22 18:29:04.380783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.388 [2024-07-22 18:29:04.398623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.388 [2024-07-22 18:29:04.398677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:52.388 [2024-07-22 18:29:04.398720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.801 ms 00:21:52.388 [2024-07-22 18:29:04.398732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.388 [2024-07-22 18:29:04.399256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.388 [2024-07-22 18:29:04.399290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:52.388 [2024-07-22 18:29:04.399306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:21:52.388 [2024-07-22 18:29:04.399325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.647 [2024-07-22 18:29:04.440282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.647 [2024-07-22 18:29:04.440352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:52.647 [2024-07-22 18:29:04.440371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.647 [2024-07-22 18:29:04.440384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.647 [2024-07-22 18:29:04.440524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.647 [2024-07-22 18:29:04.440542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:52.647 [2024-07-22 18:29:04.440555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.647 [2024-07-22 18:29:04.440575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.647 [2024-07-22 18:29:04.440644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.647 [2024-07-22 18:29:04.440662] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:52.647 [2024-07-22 18:29:04.440676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.647 [2024-07-22 18:29:04.440710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.647 [2024-07-22 18:29:04.440738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.647 [2024-07-22 18:29:04.440753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:52.647 [2024-07-22 18:29:04.440765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.647 [2024-07-22 18:29:04.440777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.647 [2024-07-22 18:29:04.544659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.647 [2024-07-22 18:29:04.544735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:52.647 [2024-07-22 18:29:04.544755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.647 [2024-07-22 18:29:04.544768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.647 [2024-07-22 18:29:04.629666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.647 [2024-07-22 18:29:04.629768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:52.647 [2024-07-22 18:29:04.629789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.647 [2024-07-22 18:29:04.629809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.647 [2024-07-22 18:29:04.629900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.647 [2024-07-22 18:29:04.629919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:52.647 [2024-07-22 18:29:04.629931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.647 [2024-07-22 18:29:04.629943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.647 [2024-07-22 18:29:04.629982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.647 [2024-07-22 18:29:04.629997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:52.647 [2024-07-22 18:29:04.630009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.647 [2024-07-22 18:29:04.630021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.647 [2024-07-22 18:29:04.630151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.647 [2024-07-22 18:29:04.630171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:52.647 [2024-07-22 18:29:04.630184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.647 [2024-07-22 18:29:04.630195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.647 [2024-07-22 18:29:04.630247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.647 [2024-07-22 18:29:04.630265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:52.647 [2024-07-22 18:29:04.630278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.647 [2024-07-22 18:29:04.630289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.647 [2024-07-22 18:29:04.630340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:21:52.647 [2024-07-22 18:29:04.630362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:52.647 [2024-07-22 18:29:04.630374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.647 [2024-07-22 18:29:04.630386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.647 [2024-07-22 18:29:04.630443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.647 [2024-07-22 18:29:04.630460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:52.647 [2024-07-22 18:29:04.630473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.647 [2024-07-22 18:29:04.630484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.647 [2024-07-22 18:29:04.630664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 439.597 ms, result 0 00:21:54.021 00:21:54.021 00:21:54.021 18:29:05 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=81005 00:21:54.021 18:29:05 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:54.021 18:29:05 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 81005 00:21:54.021 18:29:05 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81005 ']' 00:21:54.021 18:29:05 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.021 18:29:05 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.021 18:29:05 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.021 18:29:05 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.021 18:29:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:54.280 [2024-07-22 18:29:06.036988] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:54.280 [2024-07-22 18:29:06.037170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81005 ] 00:21:54.280 [2024-07-22 18:29:06.209787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.539 [2024-07-22 18:29:06.447378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.507 18:29:07 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.507 18:29:07 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:21:55.507 18:29:07 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:55.507 [2024-07-22 18:29:07.468715] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:55.507 [2024-07-22 18:29:07.468796] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:55.768 [2024-07-22 18:29:07.659557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.768 [2024-07-22 18:29:07.659704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:55.768 [2024-07-22 18:29:07.659742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:55.768 [2024-07-22 18:29:07.659801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.768 [2024-07-22 18:29:07.663736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.768 [2024-07-22 18:29:07.663784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.768 [2024-07-22 18:29:07.663802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.889 ms 00:21:55.768 [2024-07-22 18:29:07.663817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.768 [2024-07-22 18:29:07.663944] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:55.768 [2024-07-22 18:29:07.664881] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:55.768 [2024-07-22 18:29:07.664921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.768 [2024-07-22 18:29:07.664940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.768 [2024-07-22 18:29:07.664953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:21:55.768 [2024-07-22 18:29:07.664967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.768 [2024-07-22 18:29:07.666938] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:55.768 [2024-07-22 18:29:07.683453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.768 [2024-07-22 18:29:07.683502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:55.768 [2024-07-22 18:29:07.683527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.510 ms 00:21:55.768 [2024-07-22 18:29:07.683551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.768 [2024-07-22 18:29:07.683674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.768 [2024-07-22 18:29:07.683722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:55.768 [2024-07-22 18:29:07.683741] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:55.768 [2024-07-22 18:29:07.683753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.768 [2024-07-22 18:29:07.692155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.768 [2024-07-22 18:29:07.692202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.768 [2024-07-22 18:29:07.692233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.326 ms 00:21:55.768 [2024-07-22 18:29:07.692247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.768 [2024-07-22 18:29:07.692430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.768 [2024-07-22 18:29:07.692452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.768 [2024-07-22 18:29:07.692473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:21:55.768 [2024-07-22 18:29:07.692486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.768 [2024-07-22 18:29:07.692546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.768 [2024-07-22 18:29:07.692563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:55.768 [2024-07-22 18:29:07.692582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:55.768 [2024-07-22 18:29:07.692595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.768 [2024-07-22 18:29:07.692639] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:55.768 [2024-07-22 18:29:07.697532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.768 [2024-07-22 18:29:07.697576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.768 [2024-07-22 18:29:07.697593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.910 ms 00:21:55.768 [2024-07-22 18:29:07.697611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.768 [2024-07-22 18:29:07.697723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.768 [2024-07-22 18:29:07.697758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:55.768 [2024-07-22 18:29:07.697786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:55.768 [2024-07-22 18:29:07.697811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.768 [2024-07-22 18:29:07.697844] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:55.768 [2024-07-22 18:29:07.697884] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:55.768 [2024-07-22 18:29:07.697940] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:55.768 [2024-07-22 18:29:07.697972] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:55.768 [2024-07-22 18:29:07.698078] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:55.768 [2024-07-22 18:29:07.698111] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:55.768 [2024-07-22 18:29:07.698133] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:55.768 [2024-07-22 18:29:07.698154] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:55.768 [2024-07-22 18:29:07.698169] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:55.768 [2024-07-22 18:29:07.698187] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:55.768 [2024-07-22 18:29:07.698199] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:55.768 [2024-07-22 18:29:07.698233] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:55.768 [2024-07-22 18:29:07.698246] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:55.768 [2024-07-22 18:29:07.698268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.768 [2024-07-22 18:29:07.698280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:55.768 [2024-07-22 18:29:07.698298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:21:55.768 [2024-07-22 18:29:07.698310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.768 [2024-07-22 18:29:07.698420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.768 [2024-07-22 18:29:07.698436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:55.768 [2024-07-22 18:29:07.698454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:55.768 [2024-07-22 18:29:07.698466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.768 [2024-07-22 18:29:07.698597] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:55.768 [2024-07-22 18:29:07.698617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:55.768 [2024-07-22 18:29:07.698636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.768 [2024-07-22 18:29:07.698650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.768 [2024-07-22 18:29:07.698667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:55.768 [2024-07-22 18:29:07.698693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:55.768 [2024-07-22 18:29:07.698715] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:55.768 [2024-07-22 18:29:07.698728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:55.768 [2024-07-22 18:29:07.698749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:55.768 [2024-07-22 18:29:07.698761] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.768 [2024-07-22 18:29:07.698777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:55.768 [2024-07-22 18:29:07.698790] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:55.768 [2024-07-22 18:29:07.698805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.768 [2024-07-22 18:29:07.698817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:55.768 [2024-07-22 18:29:07.698833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:55.768 [2024-07-22 18:29:07.698844] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.768 
[2024-07-22 18:29:07.698860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:55.768 [2024-07-22 18:29:07.698872] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:55.768 [2024-07-22 18:29:07.698888] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.768 [2024-07-22 18:29:07.698901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:55.768 [2024-07-22 18:29:07.698918] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:55.768 [2024-07-22 18:29:07.698929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.769 [2024-07-22 18:29:07.698948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:55.769 [2024-07-22 18:29:07.698960] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:55.769 [2024-07-22 18:29:07.698980] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.769 [2024-07-22 18:29:07.698992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:55.769 [2024-07-22 18:29:07.699008] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:55.769 [2024-07-22 18:29:07.699034] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.769 [2024-07-22 18:29:07.699052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:55.769 [2024-07-22 18:29:07.699064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:55.769 [2024-07-22 18:29:07.699081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.769 [2024-07-22 18:29:07.699093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:55.769 [2024-07-22 18:29:07.699110] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:55.769 [2024-07-22 18:29:07.699121] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.769 [2024-07-22 18:29:07.699138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:55.769 [2024-07-22 18:29:07.699150] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:55.769 [2024-07-22 18:29:07.699166] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.769 [2024-07-22 18:29:07.699178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:55.769 [2024-07-22 18:29:07.699195] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:55.769 [2024-07-22 18:29:07.699207] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.769 [2024-07-22 18:29:07.699228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:55.769 [2024-07-22 18:29:07.699239] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:55.769 [2024-07-22 18:29:07.699255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.769 [2024-07-22 18:29:07.699267] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:55.769 [2024-07-22 18:29:07.699292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:55.769 [2024-07-22 18:29:07.699304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.769 [2024-07-22 18:29:07.699321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.769 [2024-07-22 18:29:07.699334] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:55.769 [2024-07-22 18:29:07.699351] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:55.769 [2024-07-22 18:29:07.699362] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:55.769 [2024-07-22 18:29:07.699379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:55.769 [2024-07-22 18:29:07.699403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:55.769 [2024-07-22 18:29:07.699422] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:55.769 [2024-07-22 18:29:07.699436] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:55.769 [2024-07-22 18:29:07.699458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.769 [2024-07-22 18:29:07.699477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:55.769 [2024-07-22 18:29:07.699501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:55.769 [2024-07-22 18:29:07.699514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:55.769 [2024-07-22 18:29:07.699531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:55.769 [2024-07-22 18:29:07.699544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:55.769 [2024-07-22 18:29:07.699562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:55.769 [2024-07-22 18:29:07.699574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:55.769 [2024-07-22 18:29:07.699592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:55.769 [2024-07-22 18:29:07.699604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:55.769 [2024-07-22 18:29:07.699621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:55.769 [2024-07-22 18:29:07.699634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:55.769 [2024-07-22 18:29:07.699651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:55.769 [2024-07-22 18:29:07.699664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:55.769 [2024-07-22 18:29:07.699693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:55.769 [2024-07-22 18:29:07.699708] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:55.769 [2024-07-22 
18:29:07.699727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.769 [2024-07-22 18:29:07.699741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:55.769 [2024-07-22 18:29:07.699762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:55.769 [2024-07-22 18:29:07.699776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:55.769 [2024-07-22 18:29:07.699793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:55.769 [2024-07-22 18:29:07.699807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.769 [2024-07-22 18:29:07.699824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:55.769 [2024-07-22 18:29:07.699838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:21:55.769 [2024-07-22 18:29:07.699855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.769 [2024-07-22 18:29:07.740999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.769 [2024-07-22 18:29:07.741065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:55.769 [2024-07-22 18:29:07.741088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.048 ms 00:21:55.769 [2024-07-22 18:29:07.741114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.769 [2024-07-22 18:29:07.741300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.769 [2024-07-22 18:29:07.741329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:55.769 [2024-07-22 18:29:07.741346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:55.769 [2024-07-22 18:29:07.741364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:07.786590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:07.786662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:56.028 [2024-07-22 18:29:07.786709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.193 ms 00:21:56.028 [2024-07-22 18:29:07.786732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:07.786852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:07.786881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:56.028 [2024-07-22 18:29:07.786897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:56.028 [2024-07-22 18:29:07.786915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:07.787485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:07.787527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:56.028 [2024-07-22 18:29:07.787552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:21:56.028 [2024-07-22 18:29:07.787569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:07.787771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:07.787799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:56.028 [2024-07-22 18:29:07.787814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:21:56.028 [2024-07-22 18:29:07.787832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:07.809786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:07.809868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:56.028 [2024-07-22 18:29:07.809889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.920 ms 00:21:56.028 [2024-07-22 18:29:07.809907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:07.826915] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:56.028 [2024-07-22 18:29:07.826965] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:56.028 [2024-07-22 18:29:07.826985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:07.827004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:56.028 [2024-07-22 18:29:07.827018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.922 ms 00:21:56.028 [2024-07-22 18:29:07.827035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:07.855945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:07.855999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:56.028 [2024-07-22 18:29:07.856018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.820 ms 00:21:56.028 [2024-07-22 18:29:07.856037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:07.871131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:07.871181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:56.028 [2024-07-22 18:29:07.871213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.001 ms 00:21:56.028 [2024-07-22 18:29:07.871236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:07.886183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:07.886232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:56.028 [2024-07-22 18:29:07.886250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.860 ms 00:21:56.028 [2024-07-22 18:29:07.886267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:07.887148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:07.887191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:56.028 [2024-07-22 18:29:07.887207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:21:56.028 [2024-07-22 18:29:07.887224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 
18:29:07.972018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:07.972095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:56.028 [2024-07-22 18:29:07.972116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.758 ms 00:21:56.028 [2024-07-22 18:29:07.972132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:07.984545] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:56.028 [2024-07-22 18:29:08.005047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:08.005108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:56.028 [2024-07-22 18:29:08.005135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.764 ms 00:21:56.028 [2024-07-22 18:29:08.005151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:08.005286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:08.005306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:56.028 [2024-07-22 18:29:08.005323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:56.028 [2024-07-22 18:29:08.005335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:08.005414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:08.005431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:56.028 [2024-07-22 18:29:08.005446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:56.028 [2024-07-22 18:29:08.005458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:08.005500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:08.005516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:56.028 [2024-07-22 18:29:08.005540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:56.028 [2024-07-22 18:29:08.005552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:08.005596] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:56.028 [2024-07-22 18:29:08.005612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:08.005628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:56.028 [2024-07-22 18:29:08.005642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:56.028 [2024-07-22 18:29:08.005656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:08.036864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:08.036915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:56.028 [2024-07-22 18:29:08.036934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.153 ms 00:21:56.028 [2024-07-22 18:29:08.036948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:08.037087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.028 [2024-07-22 18:29:08.037113] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:56.028 [2024-07-22 18:29:08.037128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:56.028 [2024-07-22 18:29:08.037150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.028 [2024-07-22 18:29:08.038302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:56.286 [2024-07-22 18:29:08.042295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 378.471 ms, result 0 00:21:56.286 [2024-07-22 18:29:08.043430] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:56.286 Some configs were skipped because the RPC state that can call them passed over. 00:21:56.286 18:29:08 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:56.544 [2024-07-22 18:29:08.332886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.544 [2024-07-22 18:29:08.333135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:56.544 [2024-07-22 18:29:08.333271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:21:56.544 [2024-07-22 18:29:08.333324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.544 [2024-07-22 18:29:08.333497] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.104 ms, result 0 00:21:56.544 true 00:21:56.544 18:29:08 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:56.801 [2024-07-22 18:29:08.616890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.801 [2024-07-22 18:29:08.617155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:56.801 [2024-07-22 18:29:08.617283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.181 ms 00:21:56.801 [2024-07-22 18:29:08.617339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.801 [2024-07-22 18:29:08.617499] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.804 ms, result 0 00:21:56.801 true 00:21:56.801 18:29:08 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 81005 00:21:56.801 18:29:08 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81005 ']' 00:21:56.801 18:29:08 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81005 00:21:56.801 18:29:08 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:21:56.801 18:29:08 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.801 18:29:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81005 00:21:56.801 killing process with pid 81005 00:21:56.801 18:29:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:56.801 18:29:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:56.801 18:29:08 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81005' 00:21:56.801 18:29:08 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81005 00:21:56.801 18:29:08 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81005 00:21:57.734 [2024-07-22 18:29:09.673274] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.734 [2024-07-22 18:29:09.673368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:57.734 [2024-07-22 18:29:09.673393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:57.734 [2024-07-22 18:29:09.673406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.734 [2024-07-22 18:29:09.673442] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:57.734 [2024-07-22 18:29:09.677140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.734 [2024-07-22 18:29:09.677181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:57.734 [2024-07-22 18:29:09.677197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:21:57.734 [2024-07-22 18:29:09.677212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.734 [2024-07-22 18:29:09.677544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.734 [2024-07-22 18:29:09.677572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:57.734 [2024-07-22 18:29:09.677587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:21:57.734 [2024-07-22 18:29:09.677600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.734 [2024-07-22 18:29:09.681757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.734 [2024-07-22 18:29:09.681821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:57.734 [2024-07-22 18:29:09.681841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.134 ms 00:21:57.734 [2024-07-22 18:29:09.681854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.734 [2024-07-22 18:29:09.689578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.734 [2024-07-22 18:29:09.689636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:57.734 [2024-07-22 18:29:09.689652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.695 ms 00:21:57.734 [2024-07-22 18:29:09.689668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.734 [2024-07-22 18:29:09.701800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.734 [2024-07-22 18:29:09.701844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:57.734 [2024-07-22 18:29:09.701862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.037 ms 00:21:57.734 [2024-07-22 18:29:09.701877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.734 [2024-07-22 18:29:09.710969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.734 [2024-07-22 18:29:09.711016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:57.734 [2024-07-22 18:29:09.711036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.047 ms 00:21:57.734 [2024-07-22 18:29:09.711049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.734 [2024-07-22 18:29:09.711207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.734 [2024-07-22 18:29:09.711230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:57.734 [2024-07-22 18:29:09.711244] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:57.734 [2024-07-22 18:29:09.711270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.734 [2024-07-22 18:29:09.724077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.734 [2024-07-22 18:29:09.724120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:57.735 [2024-07-22 18:29:09.724137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.782 ms 00:21:57.735 [2024-07-22 18:29:09.724151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.735 [2024-07-22 18:29:09.736315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.735 [2024-07-22 18:29:09.736356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:57.735 [2024-07-22 18:29:09.736388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.136 ms 00:21:57.735 [2024-07-22 18:29:09.736408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.735 [2024-07-22 18:29:09.748175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.735 [2024-07-22 18:29:09.748218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:57.735 [2024-07-22 18:29:09.748234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.739 ms 00:21:57.735 [2024-07-22 18:29:09.748248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.994 [2024-07-22 18:29:09.760054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.994 [2024-07-22 18:29:09.760097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:57.994 [2024-07-22 18:29:09.760114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.748 ms 00:21:57.994 [2024-07-22 18:29:09.760128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.994 [2024-07-22 18:29:09.760155] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:57.994 [2024-07-22 18:29:09.760179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 
18:29:09.760320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:57.994 [2024-07-22 18:29:09.760644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:21:57.994 [2024-07-22 18:29:09.760657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
[... Bands 37-84 elided: each reports the identical "0 / 261120 wr_cnt: 0 state: free" ...]
00:21:57.995 [2024-07-22 18:29:09.761340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:57.995 [2024-07-22 18:29:09.761565] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:57.995 [2024-07-22 18:29:09.761581] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 133f7ce8-c691-4f98-9963-df4bc40dc329 00:21:57.995 [2024-07-22 18:29:09.761598] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:57.995 [2024-07-22 18:29:09.761610] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:57.995 [2024-07-22 18:29:09.761623] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:57.995 [2024-07-22 18:29:09.761635] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:57.995 [2024-07-22 18:29:09.761649] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:57.995 [2024-07-22 18:29:09.761661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:57.995 [2024-07-22 18:29:09.761675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:57.995 [2024-07-22 18:29:09.761698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:57.995 [2024-07-22 18:29:09.761725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:57.995 [2024-07-22 18:29:09.761737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
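
The statistics dump above reports total writes 960 against user writes 0, hence "WAF: inf": write amplification factor is media writes divided by user writes, and with no user writes yet the quotient degenerates to infinity. A minimal sketch of that arithmetic, with illustrative names:

#include <stdio.h>

/* Sketch: deriving the "WAF: inf" line; 960.0 / 0.0 is +inf under
 * IEEE 754, which printf renders as "inf". Figures from the log. */
int main(void)
{
    double total_writes = 960.0; /* "total writes" from the dump */
    double user_writes = 0.0;    /* "user writes" from the dump */

    printf("WAF: %g\n", total_writes / user_writes); /* prints "WAF: inf" */
    return 0;
}
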
00:21:57.995 [2024-07-22 18:29:09.761751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:57.995 [2024-07-22 18:29:09.761764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.583 ms 00:21:57.995 [2024-07-22 18:29:09.761778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.995 [2024-07-22 18:29:09.778638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.995 [2024-07-22 18:29:09.778728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:57.995 [2024-07-22 18:29:09.778748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.798 ms 00:21:57.995 [2024-07-22 18:29:09.778766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.995 [2024-07-22 18:29:09.779295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.995 [2024-07-22 18:29:09.779333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:57.995 [2024-07-22 18:29:09.779352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:21:57.995 [2024-07-22 18:29:09.779375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.995 [2024-07-22 18:29:09.835749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.995 [2024-07-22 18:29:09.835836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:57.995 [2024-07-22 18:29:09.835856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.995 [2024-07-22 18:29:09.835874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.995 [2024-07-22 18:29:09.836023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.995 [2024-07-22 18:29:09.836051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:57.995 [2024-07-22 18:29:09.836067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.995 [2024-07-22 18:29:09.836093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.995 [2024-07-22 18:29:09.836164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.995 [2024-07-22 18:29:09.836192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:57.995 [2024-07-22 18:29:09.836207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.995 [2024-07-22 18:29:09.836229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.995 [2024-07-22 18:29:09.836258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.995 [2024-07-22 18:29:09.836281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:57.995 [2024-07-22 18:29:09.836295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.995 [2024-07-22 18:29:09.836312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.995 [2024-07-22 18:29:09.940778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.995 [2024-07-22 18:29:09.940862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:57.995 [2024-07-22 18:29:09.940883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.995 [2024-07-22 18:29:09.940901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.254 [2024-07-22 
18:29:10.028235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.254 [2024-07-22 18:29:10.028365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:58.254 [2024-07-22 18:29:10.028386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.254 [2024-07-22 18:29:10.028404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.254 [2024-07-22 18:29:10.028523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.254 [2024-07-22 18:29:10.028552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:58.254 [2024-07-22 18:29:10.028568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.254 [2024-07-22 18:29:10.028591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.254 [2024-07-22 18:29:10.028632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.254 [2024-07-22 18:29:10.028654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:58.254 [2024-07-22 18:29:10.028668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.254 [2024-07-22 18:29:10.028713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.254 [2024-07-22 18:29:10.028859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.254 [2024-07-22 18:29:10.028886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:58.254 [2024-07-22 18:29:10.028901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.254 [2024-07-22 18:29:10.028918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.254 [2024-07-22 18:29:10.028972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.254 [2024-07-22 18:29:10.028999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:58.254 [2024-07-22 18:29:10.029014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.254 [2024-07-22 18:29:10.029030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.254 [2024-07-22 18:29:10.029082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.254 [2024-07-22 18:29:10.029112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:58.254 [2024-07-22 18:29:10.029126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.254 [2024-07-22 18:29:10.029148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.254 [2024-07-22 18:29:10.029206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.254 [2024-07-22 18:29:10.029233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:58.254 [2024-07-22 18:29:10.029248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.254 [2024-07-22 18:29:10.029264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.254 [2024-07-22 18:29:10.029459] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.157 ms, result 0 00:21:59.188 18:29:11 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:59.188 18:29:11 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:59.188 [2024-07-22 18:29:11.127796] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:59.188 [2024-07-22 18:29:11.127971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81069 ] 00:21:59.445 [2024-07-22 18:29:11.299042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.702 [2024-07-22 18:29:11.537529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.961 [2024-07-22 18:29:11.890918] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:59.961 [2024-07-22 18:29:11.891008] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:00.220 [2024-07-22 18:29:12.054373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.220 [2024-07-22 18:29:12.054442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:00.220 [2024-07-22 18:29:12.054463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:00.220 [2024-07-22 18:29:12.054476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.220 [2024-07-22 18:29:12.057875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.220 [2024-07-22 18:29:12.057924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:00.220 [2024-07-22 18:29:12.057943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.368 ms 00:22:00.220 [2024-07-22 18:29:12.057956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.220 [2024-07-22 18:29:12.058092] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:00.220 [2024-07-22 18:29:12.059041] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:00.220 [2024-07-22 18:29:12.059084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.220 [2024-07-22 18:29:12.059099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:00.220 [2024-07-22 18:29:12.059112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:22:00.220 [2024-07-22 18:29:12.059124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.220 [2024-07-22 18:29:12.061135] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:00.220 [2024-07-22 18:29:12.077976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.220 [2024-07-22 18:29:12.078038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:00.220 [2024-07-22 18:29:12.078064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.843 ms 00:22:00.220 [2024-07-22 18:29:12.078077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.220 [2024-07-22 18:29:12.078196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.220 [2024-07-22 18:29:12.078218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:00.220 [2024-07-22 18:29:12.078232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
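
The spdk_dd invocation above reads from the ftl0 bdev (--ib) into a flat file (--of), copying --count=65536 I/O units with the bdev configuration taken from --json. Assuming the I/O unit is the FTL bdev's 4 KiB block (an assumption), that count is exactly the 256 MB the "Copying:" progress lines further below account for; a quick check, with illustrative names:

#include <stdio.h>

/* Sketch: why --count=65536 corresponds to a 256 MB copy, assuming
 * a 4 KiB I/O unit. The count comes from the spdk_dd command line. */
int main(void)
{
    unsigned long count = 65536; /* --count */
    unsigned long bs = 4096;     /* assumed I/O unit, bytes */

    printf("copy size: %lu MiB\n", count * bs / (1024 * 1024)); /* 256 */
    return 0;
}
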
[FTL][ftl0] duration: 0.028 ms 00:22:00.220 [2024-07-22 18:29:12.078244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.220 [2024-07-22 18:29:12.086615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.220 [2024-07-22 18:29:12.086666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:00.220 [2024-07-22 18:29:12.086701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.309 ms 00:22:00.220 [2024-07-22 18:29:12.086714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.220 [2024-07-22 18:29:12.086850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.220 [2024-07-22 18:29:12.086872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:00.220 [2024-07-22 18:29:12.086885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:00.220 [2024-07-22 18:29:12.086897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.220 [2024-07-22 18:29:12.086943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.220 [2024-07-22 18:29:12.086960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:00.220 [2024-07-22 18:29:12.086978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:00.220 [2024-07-22 18:29:12.086989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.220 [2024-07-22 18:29:12.087023] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:00.220 [2024-07-22 18:29:12.091954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.220 [2024-07-22 18:29:12.091991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:00.220 [2024-07-22 18:29:12.092006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.941 ms 00:22:00.220 [2024-07-22 18:29:12.092018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.220 [2024-07-22 18:29:12.092108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.220 [2024-07-22 18:29:12.092128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:00.220 [2024-07-22 18:29:12.092142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:00.220 [2024-07-22 18:29:12.092153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.220 [2024-07-22 18:29:12.092185] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:00.220 [2024-07-22 18:29:12.092218] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:00.221 [2024-07-22 18:29:12.092266] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:00.221 [2024-07-22 18:29:12.092290] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:00.221 [2024-07-22 18:29:12.092396] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:00.221 [2024-07-22 18:29:12.092412] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:00.221 [2024-07-22 18:29:12.092427] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:00.221 [2024-07-22 18:29:12.092442] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:00.221 [2024-07-22 18:29:12.092456] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:00.221 [2024-07-22 18:29:12.092469] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:00.221 [2024-07-22 18:29:12.092485] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:00.221 [2024-07-22 18:29:12.092497] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:00.221 [2024-07-22 18:29:12.092508] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:00.221 [2024-07-22 18:29:12.092521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.221 [2024-07-22 18:29:12.092532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:00.221 [2024-07-22 18:29:12.092544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:22:00.221 [2024-07-22 18:29:12.092555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.221 [2024-07-22 18:29:12.092650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.221 [2024-07-22 18:29:12.092666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:00.221 [2024-07-22 18:29:12.092701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:00.221 [2024-07-22 18:29:12.092722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.221 [2024-07-22 18:29:12.092834] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:00.221 [2024-07-22 18:29:12.092852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:00.221 [2024-07-22 18:29:12.092864] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:00.221 [2024-07-22 18:29:12.092876] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:00.221 [2024-07-22 18:29:12.092888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:00.221 [2024-07-22 18:29:12.092899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:00.221 [2024-07-22 18:29:12.092909] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:00.221 [2024-07-22 18:29:12.092921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:00.221 [2024-07-22 18:29:12.092932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:00.221 [2024-07-22 18:29:12.092943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:00.221 [2024-07-22 18:29:12.092953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:00.221 [2024-07-22 18:29:12.092964] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:00.221 [2024-07-22 18:29:12.092974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:00.221 [2024-07-22 18:29:12.092984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:00.221 [2024-07-22 18:29:12.092995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:00.221 [2024-07-22 18:29:12.093005] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:00.221 [2024-07-22 18:29:12.093015] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:00.221 [2024-07-22 18:29:12.093026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:00.221 [2024-07-22 18:29:12.093050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:00.221 [2024-07-22 18:29:12.093061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:00.221 [2024-07-22 18:29:12.093074] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:00.221 [2024-07-22 18:29:12.093084] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:00.221 [2024-07-22 18:29:12.093095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:00.221 [2024-07-22 18:29:12.093105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:00.221 [2024-07-22 18:29:12.093116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:00.221 [2024-07-22 18:29:12.093126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:00.221 [2024-07-22 18:29:12.093137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:00.221 [2024-07-22 18:29:12.093147] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:00.221 [2024-07-22 18:29:12.093158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:00.221 [2024-07-22 18:29:12.093168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:00.221 [2024-07-22 18:29:12.093178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:00.221 [2024-07-22 18:29:12.093189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:00.221 [2024-07-22 18:29:12.093199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:00.221 [2024-07-22 18:29:12.093209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:00.221 [2024-07-22 18:29:12.093220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:00.221 [2024-07-22 18:29:12.093230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:00.221 [2024-07-22 18:29:12.093241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:00.221 [2024-07-22 18:29:12.093251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:00.221 [2024-07-22 18:29:12.093261] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:00.221 [2024-07-22 18:29:12.093271] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:00.221 [2024-07-22 18:29:12.093282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:00.221 [2024-07-22 18:29:12.093293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:00.221 [2024-07-22 18:29:12.093304] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:00.221 [2024-07-22 18:29:12.093315] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:00.221 [2024-07-22 18:29:12.093327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:00.221 [2024-07-22 18:29:12.093338] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:00.221 [2024-07-22 18:29:12.093349] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:00.221 [2024-07-22 18:29:12.093361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:00.221 
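
The 90.00 MiB "Region l2p" in the NV cache layout above is exactly the L2P table implied by the figures printed a few lines earlier: 23592960 entries at a 4-byte address size. A quick check, with illustrative names:

#include <stdio.h>

/* Sketch: sizing the l2p region from the "L2P entries" and
 * "L2P address size" lines in the log. */
int main(void)
{
    unsigned long entries = 23592960; /* "L2P entries" */
    unsigned long addr_sz = 4;        /* "L2P address size" */

    printf("l2p region: %lu MiB\n", entries * addr_sz / (1024 * 1024)); /* 90 */
    return 0;
}

The SB metadata layout dumped below repeats these regions in hexadecimal blocks rather than MiB; region type 0x2 there spans blk_sz 0x5a00, and 0x5a00 blocks of (assumed) 4 KiB is that same 90 MiB.
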
[2024-07-22 18:29:12.093372] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:00.221 [2024-07-22 18:29:12.093382] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:00.221 [2024-07-22 18:29:12.093393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:00.221 [2024-07-22 18:29:12.093403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:00.221 [2024-07-22 18:29:12.093415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:00.221 [2024-07-22 18:29:12.093427] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:00.221 [2024-07-22 18:29:12.093446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:00.221 [2024-07-22 18:29:12.093459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:00.221 [2024-07-22 18:29:12.093471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:00.221 [2024-07-22 18:29:12.093483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:00.221 [2024-07-22 18:29:12.093495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:00.221 [2024-07-22 18:29:12.093506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:00.221 [2024-07-22 18:29:12.093518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:00.221 [2024-07-22 18:29:12.093529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:00.221 [2024-07-22 18:29:12.093541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:00.221 [2024-07-22 18:29:12.093552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:00.221 [2024-07-22 18:29:12.093564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:00.221 [2024-07-22 18:29:12.093575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:00.221 [2024-07-22 18:29:12.093587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:00.221 [2024-07-22 18:29:12.093599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:00.221 [2024-07-22 18:29:12.093611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:00.221 [2024-07-22 18:29:12.093623] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:00.221 [2024-07-22 18:29:12.093636] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:00.221 [2024-07-22 18:29:12.093649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:00.221 [2024-07-22 18:29:12.093661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:00.221 [2024-07-22 18:29:12.093673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:00.221 [2024-07-22 18:29:12.093700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:00.221 [2024-07-22 18:29:12.093714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.221 [2024-07-22 18:29:12.093726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:00.221 [2024-07-22 18:29:12.093738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:22:00.222 [2024-07-22 18:29:12.093750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.222 [2024-07-22 18:29:12.145859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.222 [2024-07-22 18:29:12.145945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.222 [2024-07-22 18:29:12.145967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.029 ms 00:22:00.222 [2024-07-22 18:29:12.145986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.222 [2024-07-22 18:29:12.146197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.222 [2024-07-22 18:29:12.146219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:00.222 [2024-07-22 18:29:12.146240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:00.222 [2024-07-22 18:29:12.146252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.222 [2024-07-22 18:29:12.191773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.222 [2024-07-22 18:29:12.191834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.222 [2024-07-22 18:29:12.191853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.483 ms 00:22:00.222 [2024-07-22 18:29:12.191866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.222 [2024-07-22 18:29:12.192049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.222 [2024-07-22 18:29:12.192081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.222 [2024-07-22 18:29:12.192103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:00.222 [2024-07-22 18:29:12.192120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.222 [2024-07-22 18:29:12.192705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.222 [2024-07-22 18:29:12.192741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.222 [2024-07-22 18:29:12.192757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:22:00.222 [2024-07-22 18:29:12.192769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.222 [2024-07-22 
18:29:12.192962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.222 [2024-07-22 18:29:12.192994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.222 [2024-07-22 18:29:12.193008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:22:00.222 [2024-07-22 18:29:12.193020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.222 [2024-07-22 18:29:12.213291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.222 [2024-07-22 18:29:12.213337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.222 [2024-07-22 18:29:12.213355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.237 ms 00:22:00.222 [2024-07-22 18:29:12.213367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.222 [2024-07-22 18:29:12.230229] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:00.222 [2024-07-22 18:29:12.230287] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:00.222 [2024-07-22 18:29:12.230323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.222 [2024-07-22 18:29:12.230336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:00.222 [2024-07-22 18:29:12.230349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.791 ms 00:22:00.222 [2024-07-22 18:29:12.230361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.480 [2024-07-22 18:29:12.260313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.480 [2024-07-22 18:29:12.260372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:00.480 [2024-07-22 18:29:12.260407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.855 ms 00:22:00.480 [2024-07-22 18:29:12.260419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.480 [2024-07-22 18:29:12.276546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.480 [2024-07-22 18:29:12.276586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:00.480 [2024-07-22 18:29:12.276618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.012 ms 00:22:00.480 [2024-07-22 18:29:12.276630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.480 [2024-07-22 18:29:12.292048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.481 [2024-07-22 18:29:12.292090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:00.481 [2024-07-22 18:29:12.292122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.299 ms 00:22:00.481 [2024-07-22 18:29:12.292134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.481 [2024-07-22 18:29:12.293082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.481 [2024-07-22 18:29:12.293119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:00.481 [2024-07-22 18:29:12.293135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:22:00.481 [2024-07-22 18:29:12.293148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.481 [2024-07-22 18:29:12.369709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:00.481 [2024-07-22 18:29:12.369796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:00.481 [2024-07-22 18:29:12.369833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.525 ms 00:22:00.481 [2024-07-22 18:29:12.369846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.481 [2024-07-22 18:29:12.382476] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:00.481 [2024-07-22 18:29:12.403663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.481 [2024-07-22 18:29:12.403784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:00.481 [2024-07-22 18:29:12.403805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.630 ms 00:22:00.481 [2024-07-22 18:29:12.403817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.481 [2024-07-22 18:29:12.403967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.481 [2024-07-22 18:29:12.403989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:00.481 [2024-07-22 18:29:12.404008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:00.481 [2024-07-22 18:29:12.404020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.481 [2024-07-22 18:29:12.404097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.481 [2024-07-22 18:29:12.404114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:00.481 [2024-07-22 18:29:12.404127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:00.481 [2024-07-22 18:29:12.404138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.481 [2024-07-22 18:29:12.404174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.481 [2024-07-22 18:29:12.404188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:00.481 [2024-07-22 18:29:12.404201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:00.481 [2024-07-22 18:29:12.404218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.481 [2024-07-22 18:29:12.404258] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:00.481 [2024-07-22 18:29:12.404275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.481 [2024-07-22 18:29:12.404287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:00.481 [2024-07-22 18:29:12.404299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:00.481 [2024-07-22 18:29:12.404311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.481 [2024-07-22 18:29:12.436452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.481 [2024-07-22 18:29:12.436514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:00.481 [2024-07-22 18:29:12.436557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.108 ms 00:22:00.481 [2024-07-22 18:29:12.436569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.481 [2024-07-22 18:29:12.436743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.481 [2024-07-22 18:29:12.436765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:22:00.481 [2024-07-22 18:29:12.436779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:22:00.481 [2024-07-22 18:29:12.436791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.481 [2024-07-22 18:29:12.437933] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:00.481 [2024-07-22 18:29:12.441928] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.217 ms, result 0 00:22:00.481 [2024-07-22 18:29:12.442794] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:00.481 [2024-07-22 18:29:12.458887] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:10.345  Copying: 28/256 [MB] (28 MBps) Copying: 54/256 [MB] (25 MBps) Copying: 81/256 [MB] (26 MBps) Copying: 107/256 [MB] (25 MBps) Copying: 132/256 [MB] (25 MBps) Copying: 158/256 [MB] (25 MBps) Copying: 183/256 [MB] (25 MBps) Copying: 210/256 [MB] (26 MBps) Copying: 234/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-22 18:29:22.330144] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:10.345 [2024-07-22 18:29:22.343013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.345 [2024-07-22 18:29:22.343196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:10.345 [2024-07-22 18:29:22.343325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:10.345 [2024-07-22 18:29:22.343378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.345 [2024-07-22 18:29:22.343483] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:10.345 [2024-07-22 18:29:22.347256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.345 [2024-07-22 18:29:22.347434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:10.345 [2024-07-22 18:29:22.347585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.578 ms 00:22:10.345 [2024-07-22 18:29:22.347740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.345 [2024-07-22 18:29:22.348078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.345 [2024-07-22 18:29:22.348112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:10.345 [2024-07-22 18:29:22.348128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:22:10.345 [2024-07-22 18:29:22.348140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.345 [2024-07-22 18:29:22.351815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.345 [2024-07-22 18:29:22.351848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:10.345 [2024-07-22 18:29:22.351864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.650 ms 00:22:10.345 [2024-07-22 18:29:22.351883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.603 [2024-07-22 18:29:22.359373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.603 [2024-07-22 18:29:22.359442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:10.603 [2024-07-22 18:29:22.359468] 
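
The copy above reports "average 25 MBps": per the console timestamps, 256 MB moved between roughly 18:29:12.5 and 18:29:22.3, about 9.9 seconds of wall clock, which is consistent with that figure. A rough check, with illustrative names:

#include <stdio.h>

/* Sketch: back-of-the-envelope throughput from the progress line and
 * the surrounding timestamps; both figures are read off the log. */
int main(void)
{
    double mb = 256.0;  /* total copied */
    double secs = 9.9;  /* ~18:29:12.5 to ~18:29:22.3 */

    printf("%.1f MBps\n", mb / secs); /* ~25.9 */
    return 0;
}
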
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.463 ms 00:22:10.603 [2024-07-22 18:29:22.359486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.603 [2024-07-22 18:29:22.390411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.603 [2024-07-22 18:29:22.390469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:10.603 [2024-07-22 18:29:22.390489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.839 ms 00:22:10.603 [2024-07-22 18:29:22.390502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.603 [2024-07-22 18:29:22.408540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.603 [2024-07-22 18:29:22.408594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:10.603 [2024-07-22 18:29:22.408613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.958 ms 00:22:10.603 [2024-07-22 18:29:22.408626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.604 [2024-07-22 18:29:22.408835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.604 [2024-07-22 18:29:22.408857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:10.604 [2024-07-22 18:29:22.408871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:22:10.604 [2024-07-22 18:29:22.408883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.604 [2024-07-22 18:29:22.440246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.604 [2024-07-22 18:29:22.440296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:10.604 [2024-07-22 18:29:22.440315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.337 ms 00:22:10.604 [2024-07-22 18:29:22.440327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.604 [2024-07-22 18:29:22.471468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.604 [2024-07-22 18:29:22.471525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:10.604 [2024-07-22 18:29:22.471544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.064 ms 00:22:10.604 [2024-07-22 18:29:22.471568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.604 [2024-07-22 18:29:22.502189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.604 [2024-07-22 18:29:22.502236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:10.604 [2024-07-22 18:29:22.502255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.528 ms 00:22:10.604 [2024-07-22 18:29:22.502266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.604 [2024-07-22 18:29:22.537566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.604 [2024-07-22 18:29:22.537644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:10.604 [2024-07-22 18:29:22.537670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.188 ms 00:22:10.604 [2024-07-22 18:29:22.537701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.604 [2024-07-22 18:29:22.537804] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:10.604 [2024-07-22 18:29:22.537839] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2-99 elided: each reports the identical "0 / 261120 wr_cnt: 0 state: free" ...]
00:22:10.605 [2024-07-22 18:29:22.539505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120
wr_cnt: 0 state: free 00:22:10.605 [2024-07-22 18:29:22.539527] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:10.605 [2024-07-22 18:29:22.539539] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 133f7ce8-c691-4f98-9963-df4bc40dc329 00:22:10.605 [2024-07-22 18:29:22.539551] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:10.605 [2024-07-22 18:29:22.539562] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:10.605 [2024-07-22 18:29:22.539589] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:10.605 [2024-07-22 18:29:22.539603] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:10.605 [2024-07-22 18:29:22.539613] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:10.605 [2024-07-22 18:29:22.539625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:10.605 [2024-07-22 18:29:22.539636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:10.605 [2024-07-22 18:29:22.539647] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:10.605 [2024-07-22 18:29:22.539657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:10.605 [2024-07-22 18:29:22.539669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.605 [2024-07-22 18:29:22.539695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:10.605 [2024-07-22 18:29:22.539710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.867 ms 00:22:10.605 [2024-07-22 18:29:22.539728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.605 [2024-07-22 18:29:22.557979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.605 [2024-07-22 18:29:22.558146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:10.605 [2024-07-22 18:29:22.558272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.213 ms 00:22:10.605 [2024-07-22 18:29:22.558439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.605 [2024-07-22 18:29:22.559069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.605 [2024-07-22 18:29:22.559207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:10.605 [2024-07-22 18:29:22.559331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:22:10.605 [2024-07-22 18:29:22.559382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.605 [2024-07-22 18:29:22.601023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.605 [2024-07-22 18:29:22.601264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:10.605 [2024-07-22 18:29:22.601386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.605 [2024-07-22 18:29:22.601447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.605 [2024-07-22 18:29:22.601696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.605 [2024-07-22 18:29:22.601832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:10.605 [2024-07-22 18:29:22.601954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.605 [2024-07-22 18:29:22.602006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:10.605 [2024-07-22 18:29:22.602221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.605 [2024-07-22 18:29:22.602346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:10.605 [2024-07-22 18:29:22.602476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.605 [2024-07-22 18:29:22.602528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.605 [2024-07-22 18:29:22.602627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.605 [2024-07-22 18:29:22.602794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:10.605 [2024-07-22 18:29:22.602908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.605 [2024-07-22 18:29:22.603054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.863 [2024-07-22 18:29:22.711466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.863 [2024-07-22 18:29:22.711689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:10.863 [2024-07-22 18:29:22.711810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.863 [2024-07-22 18:29:22.711956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.863 [2024-07-22 18:29:22.798553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.863 [2024-07-22 18:29:22.798783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:10.863 [2024-07-22 18:29:22.798938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.863 [2024-07-22 18:29:22.799091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.863 [2024-07-22 18:29:22.799316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.863 [2024-07-22 18:29:22.799480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:10.863 [2024-07-22 18:29:22.799666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.863 [2024-07-22 18:29:22.799704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.863 [2024-07-22 18:29:22.799754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.863 [2024-07-22 18:29:22.799771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:10.863 [2024-07-22 18:29:22.799783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.863 [2024-07-22 18:29:22.799795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.863 [2024-07-22 18:29:22.799940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.863 [2024-07-22 18:29:22.799961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:10.863 [2024-07-22 18:29:22.799974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.863 [2024-07-22 18:29:22.799985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.863 [2024-07-22 18:29:22.800045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.863 [2024-07-22 18:29:22.800064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:10.863 [2024-07-22 18:29:22.800077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.863 [2024-07-22 
18:29:22.800090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.863 [2024-07-22 18:29:22.800148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.863 [2024-07-22 18:29:22.800166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:10.863 [2024-07-22 18:29:22.800178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.863 [2024-07-22 18:29:22.800189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.863 [2024-07-22 18:29:22.800248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.863 [2024-07-22 18:29:22.800266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:10.863 [2024-07-22 18:29:22.800278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.863 [2024-07-22 18:29:22.800290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.863 [2024-07-22 18:29:22.800471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 457.447 ms, result 0 00:22:12.238 00:22:12.238 00:22:12.238 18:29:24 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:12.238 18:29:24 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:12.803 18:29:24 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:12.803 [2024-07-22 18:29:24.682233] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:22:12.803 [2024-07-22 18:29:24.682448] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81207 ] 00:22:13.061 [2024-07-22 18:29:24.856323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.317 [2024-07-22 18:29:25.133516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.577 [2024-07-22 18:29:25.487270] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.577 [2024-07-22 18:29:25.487351] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.836 [2024-07-22 18:29:25.651660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.651744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:13.836 [2024-07-22 18:29:25.651766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:13.836 [2024-07-22 18:29:25.651779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.655112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.655157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.836 [2024-07-22 18:29:25.655175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:22:13.836 [2024-07-22 18:29:25.655187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.655324] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:13.836 [2024-07-22 18:29:25.656463] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:13.836 [2024-07-22 18:29:25.656509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.656525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.836 [2024-07-22 18:29:25.656539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:22:13.836 [2024-07-22 18:29:25.656551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.658437] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:13.836 [2024-07-22 18:29:25.675122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.675176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:13.836 [2024-07-22 18:29:25.675202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.685 ms 00:22:13.836 [2024-07-22 18:29:25.675215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.675353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.675376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:13.836 [2024-07-22 18:29:25.675407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:13.836 [2024-07-22 18:29:25.675427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.684106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:13.836 [2024-07-22 18:29:25.684175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.836 [2024-07-22 18:29:25.684193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.591 ms 00:22:13.836 [2024-07-22 18:29:25.684205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.684365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.684394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.836 [2024-07-22 18:29:25.684408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:13.836 [2024-07-22 18:29:25.684419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.684469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.684487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:13.836 [2024-07-22 18:29:25.684504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:13.836 [2024-07-22 18:29:25.684515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.684551] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:13.836 [2024-07-22 18:29:25.689641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.689690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.836 [2024-07-22 18:29:25.689707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.101 ms 00:22:13.836 [2024-07-22 18:29:25.689719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.689824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.689845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:13.836 [2024-07-22 18:29:25.689858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:13.836 [2024-07-22 18:29:25.689869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.689901] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:13.836 [2024-07-22 18:29:25.689933] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:13.836 [2024-07-22 18:29:25.689982] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:13.836 [2024-07-22 18:29:25.690004] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:13.836 [2024-07-22 18:29:25.690109] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:13.836 [2024-07-22 18:29:25.690130] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:13.836 [2024-07-22 18:29:25.690145] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:13.836 [2024-07-22 18:29:25.690160] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:13.836 [2024-07-22 18:29:25.690174] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:13.836 [2024-07-22 18:29:25.690187] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:13.836 [2024-07-22 18:29:25.690204] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:13.836 [2024-07-22 18:29:25.690215] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:13.836 [2024-07-22 18:29:25.690227] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:13.836 [2024-07-22 18:29:25.690240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.690252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:13.836 [2024-07-22 18:29:25.690263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:22:13.836 [2024-07-22 18:29:25.690275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.690373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.690391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:13.836 [2024-07-22 18:29:25.690403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:13.836 [2024-07-22 18:29:25.690420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.690531] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:13.836 [2024-07-22 18:29:25.690549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:13.836 [2024-07-22 18:29:25.690562] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.836 [2024-07-22 18:29:25.690574] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.836 [2024-07-22 18:29:25.690586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:13.836 [2024-07-22 18:29:25.690596] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:13.836 [2024-07-22 18:29:25.690607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:13.836 [2024-07-22 18:29:25.690620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:13.836 [2024-07-22 18:29:25.690630] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:13.836 [2024-07-22 18:29:25.690641] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.836 [2024-07-22 18:29:25.690651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:13.836 [2024-07-22 18:29:25.690662] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:13.836 [2024-07-22 18:29:25.690672] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.836 [2024-07-22 18:29:25.690699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:13.836 [2024-07-22 18:29:25.690711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:13.836 [2024-07-22 18:29:25.690721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.836 [2024-07-22 18:29:25.690733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:13.836 [2024-07-22 18:29:25.690745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:13.836 [2024-07-22 18:29:25.690771] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.836 [2024-07-22 18:29:25.690783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:13.836 [2024-07-22 18:29:25.690793] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:13.836 [2024-07-22 18:29:25.690804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.836 [2024-07-22 18:29:25.690815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:13.836 [2024-07-22 18:29:25.690825] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:13.836 [2024-07-22 18:29:25.690836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.836 [2024-07-22 18:29:25.690846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:13.836 [2024-07-22 18:29:25.690857] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:13.836 [2024-07-22 18:29:25.690868] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.836 [2024-07-22 18:29:25.690878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:13.836 [2024-07-22 18:29:25.690888] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:13.836 [2024-07-22 18:29:25.690899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.836 [2024-07-22 18:29:25.690909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:13.836 [2024-07-22 18:29:25.690920] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:13.836 [2024-07-22 18:29:25.690930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.836 [2024-07-22 18:29:25.690940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:13.836 [2024-07-22 18:29:25.690950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:13.836 [2024-07-22 18:29:25.690961] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.836 [2024-07-22 18:29:25.690972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:13.836 [2024-07-22 18:29:25.690982] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:13.836 [2024-07-22 18:29:25.690992] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.836 [2024-07-22 18:29:25.691003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:13.836 [2024-07-22 18:29:25.691013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:13.836 [2024-07-22 18:29:25.691025] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.836 [2024-07-22 18:29:25.691036] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:13.836 [2024-07-22 18:29:25.691048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:13.836 [2024-07-22 18:29:25.691059] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.836 [2024-07-22 18:29:25.691070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.836 [2024-07-22 18:29:25.691082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:13.836 [2024-07-22 18:29:25.691094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:13.836 [2024-07-22 18:29:25.691105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:13.836 
[2024-07-22 18:29:25.691116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:13.836 [2024-07-22 18:29:25.691127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:13.836 [2024-07-22 18:29:25.691138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:13.836 [2024-07-22 18:29:25.691150] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:13.836 [2024-07-22 18:29:25.691170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.836 [2024-07-22 18:29:25.691183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:13.836 [2024-07-22 18:29:25.691195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:13.836 [2024-07-22 18:29:25.691207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:13.836 [2024-07-22 18:29:25.691218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:13.836 [2024-07-22 18:29:25.691230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:13.836 [2024-07-22 18:29:25.691241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:13.836 [2024-07-22 18:29:25.691252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:13.836 [2024-07-22 18:29:25.691263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:13.836 [2024-07-22 18:29:25.691275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:13.836 [2024-07-22 18:29:25.691286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:13.836 [2024-07-22 18:29:25.691298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:13.836 [2024-07-22 18:29:25.691309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:13.836 [2024-07-22 18:29:25.691321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:13.836 [2024-07-22 18:29:25.691334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:13.836 [2024-07-22 18:29:25.691346] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:13.836 [2024-07-22 18:29:25.691359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.836 [2024-07-22 18:29:25.691372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:13.836 [2024-07-22 18:29:25.691384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:13.836 [2024-07-22 18:29:25.691417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:13.836 [2024-07-22 18:29:25.691438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:13.836 [2024-07-22 18:29:25.691458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.691478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:13.836 [2024-07-22 18:29:25.691497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:22:13.836 [2024-07-22 18:29:25.691514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.746657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.746933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.836 [2024-07-22 18:29:25.747056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.056 ms 00:22:13.836 [2024-07-22 18:29:25.747116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.747487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.747645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:13.836 [2024-07-22 18:29:25.747811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:22:13.836 [2024-07-22 18:29:25.747865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.791057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.791301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:13.836 [2024-07-22 18:29:25.791437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.067 ms 00:22:13.836 [2024-07-22 18:29:25.791504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.836 [2024-07-22 18:29:25.791763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.836 [2024-07-22 18:29:25.791884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.837 [2024-07-22 18:29:25.792006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:13.837 [2024-07-22 18:29:25.792110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.837 [2024-07-22 18:29:25.792753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.837 [2024-07-22 18:29:25.792886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.837 [2024-07-22 18:29:25.792992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:22:13.837 [2024-07-22 18:29:25.793090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.837 [2024-07-22 18:29:25.793315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.837 [2024-07-22 18:29:25.793384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.837 [2024-07-22 18:29:25.793499] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:22:13.837 [2024-07-22 18:29:25.793550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.837 [2024-07-22 18:29:25.812531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.837 [2024-07-22 18:29:25.812793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:13.837 [2024-07-22 18:29:25.812930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.914 ms 00:22:13.837 [2024-07-22 18:29:25.812982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.837 [2024-07-22 18:29:25.830089] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:13.837 [2024-07-22 18:29:25.830304] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:13.837 [2024-07-22 18:29:25.830441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.837 [2024-07-22 18:29:25.830484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:13.837 [2024-07-22 18:29:25.830610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.144 ms 00:22:13.837 [2024-07-22 18:29:25.830652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.094 [2024-07-22 18:29:25.860196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.094 [2024-07-22 18:29:25.860429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:14.094 [2024-07-22 18:29:25.860550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.358 ms 00:22:14.094 [2024-07-22 18:29:25.860573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.094 [2024-07-22 18:29:25.876781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.094 [2024-07-22 18:29:25.876857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:14.094 [2024-07-22 18:29:25.876878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.048 ms 00:22:14.094 [2024-07-22 18:29:25.876889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.094 [2024-07-22 18:29:25.892973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.094 [2024-07-22 18:29:25.893028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:14.094 [2024-07-22 18:29:25.893048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.923 ms 00:22:14.094 [2024-07-22 18:29:25.893060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.094 [2024-07-22 18:29:25.894021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.094 [2024-07-22 18:29:25.894063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:14.094 [2024-07-22 18:29:25.894081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:22:14.094 [2024-07-22 18:29:25.894093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.094 [2024-07-22 18:29:25.971641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.094 [2024-07-22 18:29:25.971720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:14.094 [2024-07-22 18:29:25.971740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.510 ms 00:22:14.094 [2024-07-22 18:29:25.971753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.094 [2024-07-22 18:29:25.984972] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:14.094 [2024-07-22 18:29:26.006412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.094 [2024-07-22 18:29:26.006493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:14.094 [2024-07-22 18:29:26.006515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.496 ms 00:22:14.094 [2024-07-22 18:29:26.006528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.094 [2024-07-22 18:29:26.006675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.094 [2024-07-22 18:29:26.006722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:14.094 [2024-07-22 18:29:26.006755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:14.094 [2024-07-22 18:29:26.006768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.094 [2024-07-22 18:29:26.006846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.094 [2024-07-22 18:29:26.006864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:14.094 [2024-07-22 18:29:26.006876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:14.094 [2024-07-22 18:29:26.006888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.094 [2024-07-22 18:29:26.006924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.094 [2024-07-22 18:29:26.006939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:14.094 [2024-07-22 18:29:26.006952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:14.094 [2024-07-22 18:29:26.006969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.094 [2024-07-22 18:29:26.007009] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:14.094 [2024-07-22 18:29:26.007027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.094 [2024-07-22 18:29:26.007040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:14.094 [2024-07-22 18:29:26.007054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:14.094 [2024-07-22 18:29:26.007067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.094 [2024-07-22 18:29:26.038807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.094 [2024-07-22 18:29:26.038875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:14.094 [2024-07-22 18:29:26.038904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.704 ms 00:22:14.094 [2024-07-22 18:29:26.038917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.094 [2024-07-22 18:29:26.039078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.094 [2024-07-22 18:29:26.039099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:14.094 [2024-07-22 18:29:26.039113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:14.094 [2024-07-22 18:29:26.039125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:14.094 [2024-07-22 18:29:26.040358] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:14.094 [2024-07-22 18:29:26.044853] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 388.355 ms, result 0 00:22:14.094 [2024-07-22 18:29:26.045786] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:14.094 [2024-07-22 18:29:26.061984] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:14.352  Copying: 4096/4096 [kB] (average 26 MBps)[2024-07-22 18:29:26.217184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:14.352 [2024-07-22 18:29:26.230433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.352 [2024-07-22 18:29:26.230511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:14.352 [2024-07-22 18:29:26.230534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:14.352 [2024-07-22 18:29:26.230546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.352 [2024-07-22 18:29:26.230595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:14.352 [2024-07-22 18:29:26.234325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.352 [2024-07-22 18:29:26.234381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:14.352 [2024-07-22 18:29:26.234399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.705 ms 00:22:14.352 [2024-07-22 18:29:26.234410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.352 [2024-07-22 18:29:26.236313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.352 [2024-07-22 18:29:26.236360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:14.352 [2024-07-22 18:29:26.236380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.861 ms 00:22:14.352 [2024-07-22 18:29:26.236392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.352 [2024-07-22 18:29:26.240345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.352 [2024-07-22 18:29:26.240394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:14.352 [2024-07-22 18:29:26.240412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.926 ms 00:22:14.352 [2024-07-22 18:29:26.240436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.352 [2024-07-22 18:29:26.247888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.352 [2024-07-22 18:29:26.247955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:14.352 [2024-07-22 18:29:26.247974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.358 ms 00:22:14.352 [2024-07-22 18:29:26.247986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.352 [2024-07-22 18:29:26.280947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.352 [2024-07-22 18:29:26.281016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:14.352 [2024-07-22 18:29:26.281037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
32.872 ms 00:22:14.352 [2024-07-22 18:29:26.281049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.352 [2024-07-22 18:29:26.298978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.352 [2024-07-22 18:29:26.299046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:14.352 [2024-07-22 18:29:26.299068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.823 ms 00:22:14.352 [2024-07-22 18:29:26.299080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.352 [2024-07-22 18:29:26.299308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.352 [2024-07-22 18:29:26.299331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:14.352 [2024-07-22 18:29:26.299345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:22:14.352 [2024-07-22 18:29:26.299357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.352 [2024-07-22 18:29:26.331184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.352 [2024-07-22 18:29:26.331251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:14.352 [2024-07-22 18:29:26.331271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.799 ms 00:22:14.352 [2024-07-22 18:29:26.331283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.352 [2024-07-22 18:29:26.361995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.352 [2024-07-22 18:29:26.362062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:14.352 [2024-07-22 18:29:26.362083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.614 ms 00:22:14.352 [2024-07-22 18:29:26.362094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.610 [2024-07-22 18:29:26.392546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.610 [2024-07-22 18:29:26.392603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:14.610 [2024-07-22 18:29:26.392624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.358 ms 00:22:14.610 [2024-07-22 18:29:26.392636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.610 [2024-07-22 18:29:26.423427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.610 [2024-07-22 18:29:26.423514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:14.611 [2024-07-22 18:29:26.423546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.627 ms 00:22:14.611 [2024-07-22 18:29:26.423558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.611 [2024-07-22 18:29:26.423666] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:14.611 [2024-07-22 18:29:26.423716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 
18:29:26.423792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.423994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:22:14.611 [2024-07-22 18:29:26.424086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:14.611 [2024-07-22 18:29:26.424775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:14.612 [2024-07-22 18:29:26.424954] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:14.612 [2024-07-22 18:29:26.424967] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 133f7ce8-c691-4f98-9963-df4bc40dc329 00:22:14.612 [2024-07-22 18:29:26.424979] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:14.612 [2024-07-22 18:29:26.424990] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:14.612 
[2024-07-22 18:29:26.425016] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:14.612 [2024-07-22 18:29:26.425029] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:14.612 [2024-07-22 18:29:26.425039] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:14.612 [2024-07-22 18:29:26.425051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:14.612 [2024-07-22 18:29:26.425062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:14.612 [2024-07-22 18:29:26.425073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:14.612 [2024-07-22 18:29:26.425083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:14.612 [2024-07-22 18:29:26.425094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.612 [2024-07-22 18:29:26.425106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:14.612 [2024-07-22 18:29:26.425119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:22:14.612 [2024-07-22 18:29:26.425135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.612 [2024-07-22 18:29:26.442204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.612 [2024-07-22 18:29:26.442255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:14.612 [2024-07-22 18:29:26.442273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.037 ms 00:22:14.612 [2024-07-22 18:29:26.442285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.612 [2024-07-22 18:29:26.442808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.612 [2024-07-22 18:29:26.442832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:14.612 [2024-07-22 18:29:26.442854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:22:14.612 [2024-07-22 18:29:26.442866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.612 [2024-07-22 18:29:26.484273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.612 [2024-07-22 18:29:26.484335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:14.612 [2024-07-22 18:29:26.484356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.612 [2024-07-22 18:29:26.484368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.612 [2024-07-22 18:29:26.484496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.612 [2024-07-22 18:29:26.484514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:14.612 [2024-07-22 18:29:26.484536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.612 [2024-07-22 18:29:26.484547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.612 [2024-07-22 18:29:26.484615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.612 [2024-07-22 18:29:26.484635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:14.612 [2024-07-22 18:29:26.484648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.612 [2024-07-22 18:29:26.484660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.612 [2024-07-22 18:29:26.484708] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:22:14.612 [2024-07-22 18:29:26.484725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:14.612 [2024-07-22 18:29:26.484738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.612 [2024-07-22 18:29:26.484756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.612 [2024-07-22 18:29:26.592192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.612 [2024-07-22 18:29:26.592261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:14.612 [2024-07-22 18:29:26.592281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.612 [2024-07-22 18:29:26.592293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.918 [2024-07-22 18:29:26.683482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.918 [2024-07-22 18:29:26.683568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:14.918 [2024-07-22 18:29:26.683601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.918 [2024-07-22 18:29:26.683613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.918 [2024-07-22 18:29:26.683730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.918 [2024-07-22 18:29:26.683758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:14.918 [2024-07-22 18:29:26.683772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.918 [2024-07-22 18:29:26.683783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.918 [2024-07-22 18:29:26.683823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.918 [2024-07-22 18:29:26.683838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:14.918 [2024-07-22 18:29:26.683850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.918 [2024-07-22 18:29:26.683861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.918 [2024-07-22 18:29:26.683996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.918 [2024-07-22 18:29:26.684017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:14.918 [2024-07-22 18:29:26.684030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.918 [2024-07-22 18:29:26.684041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.918 [2024-07-22 18:29:26.684094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.918 [2024-07-22 18:29:26.684119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:14.918 [2024-07-22 18:29:26.684132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.918 [2024-07-22 18:29:26.684144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.918 [2024-07-22 18:29:26.684202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.918 [2024-07-22 18:29:26.684219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:14.918 [2024-07-22 18:29:26.684231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.918 [2024-07-22 18:29:26.684242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
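The shutdown dump above has two parts: per-band validity lines from ftl_debug.c, where each "Band N: V / 261120" gives the valid-block count out of a band's 261120 blocks plus its write count and state (a device that never took user I/O reports every band free with zero valid blocks), and a statistics block whose "WAF: inf" follows directly from the counters next to it: write amplification is the ratio of media writes to user writes, and with user writes at 0 the ratio is undefined. A minimal sketch of that arithmetic, assuming only the two counters from the dump (not SPDK's actual code, which lives in ftl_debug.c and may differ in detail):

total_writes=960   # "total writes" from the dump above
user_writes=0      # "user writes" from the dump above
if [ "$user_writes" -eq 0 ]; then
    echo "WAF: inf"    # no user I/O yet, so the ratio is undefined
else
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
fi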
00:22:14.918 [2024-07-22 18:29:26.684299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.918 [2024-07-22 18:29:26.684316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:14.918 [2024-07-22 18:29:26.684328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.918 [2024-07-22 18:29:26.684343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.918 [2024-07-22 18:29:26.684526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 454.109 ms, result 0 00:22:15.878 00:22:15.878 00:22:15.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.878 18:29:27 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=81242 00:22:15.878 18:29:27 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 81242 00:22:15.878 18:29:27 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:15.878 18:29:27 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81242 ']' 00:22:15.878 18:29:27 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.878 18:29:27 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.878 18:29:27 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.878 18:29:27 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.878 18:29:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:16.137 [2024-07-22 18:29:27.957178] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:16.137 [2024-07-22 18:29:27.957615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81242 ] 00:22:16.137 [2024-07-22 18:29:28.133722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.396 [2024-07-22 18:29:28.393227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.331 18:29:29 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.331 18:29:29 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:22:17.331 18:29:29 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:17.589 [2024-07-22 18:29:29.456445] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:17.589 [2024-07-22 18:29:29.456778] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:17.851 [2024-07-22 18:29:29.638253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.851 [2024-07-22 18:29:29.638327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:17.851 [2024-07-22 18:29:29.638349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:17.851 [2024-07-22 18:29:29.638364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.851 [2024-07-22 18:29:29.641776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.851 [2024-07-22 18:29:29.641824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:17.851 [2024-07-22 18:29:29.641843] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.382 ms 00:22:17.851 [2024-07-22 18:29:29.641857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.851 [2024-07-22 18:29:29.642002] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:17.851 [2024-07-22 18:29:29.643006] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:17.851 [2024-07-22 18:29:29.643048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.851 [2024-07-22 18:29:29.643067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:17.851 [2024-07-22 18:29:29.643080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:22:17.851 [2024-07-22 18:29:29.643095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.851 [2024-07-22 18:29:29.645103] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:17.852 [2024-07-22 18:29:29.666188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.852 [2024-07-22 18:29:29.666294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:17.852 [2024-07-22 18:29:29.666337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.064 ms 00:22:17.852 [2024-07-22 18:29:29.666362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.852 [2024-07-22 18:29:29.666630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.852 [2024-07-22 18:29:29.666666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:17.852 [2024-07-22 18:29:29.666741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:17.852 [2024-07-22 18:29:29.666764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.852 [2024-07-22 18:29:29.676965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.852 [2024-07-22 18:29:29.677055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:17.852 [2024-07-22 18:29:29.677104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.076 ms 00:22:17.852 [2024-07-22 18:29:29.677129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.852 [2024-07-22 18:29:29.677411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.852 [2024-07-22 18:29:29.677446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:17.852 [2024-07-22 18:29:29.677485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:22:17.852 [2024-07-22 18:29:29.677507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.852 [2024-07-22 18:29:29.677594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.852 [2024-07-22 18:29:29.677620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:17.852 [2024-07-22 18:29:29.677645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:17.852 [2024-07-22 18:29:29.677667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.852 [2024-07-22 18:29:29.677772] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:17.852 [2024-07-22 18:29:29.684509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
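The startup above retraces the shutdown's rollback list in the opposite order: open the base and cache bdevs, load and validate the superblock, then rebuild memory pools, bands, and IO channels. The log reports nvc0n1p0 as the write-buffer cache. A hedged sketch of the equivalent manual step follows; the base bdev name nvme0n1 is hypothetical, and the bdev_ftl_create flags are assumed from rpc.py's help output rather than taken from this run, which went through load_config instead:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# -b: name of the FTL bdev to expose; -d: base (data) bdev, hypothetical here;
# -c: write-buffer cache bdev, as reported in the log above.
$RPC bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0

Reattaching the same instance later would reuse the device UUID printed in the statistics dump (133f7ce8-c691-4f98-9963-df4bc40dc329).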
00:22:17.852 [2024-07-22 18:29:29.684574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:17.852 [2024-07-22 18:29:29.684604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.758 ms 00:22:17.852 [2024-07-22 18:29:29.684629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.852 [2024-07-22 18:29:29.684764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.852 [2024-07-22 18:29:29.684812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:17.852 [2024-07-22 18:29:29.684837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:17.852 [2024-07-22 18:29:29.684864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.852 [2024-07-22 18:29:29.684913] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:17.852 [2024-07-22 18:29:29.684962] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:17.852 [2024-07-22 18:29:29.685034] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:17.852 [2024-07-22 18:29:29.685080] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:17.852 [2024-07-22 18:29:29.685216] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:17.852 [2024-07-22 18:29:29.685256] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:17.852 [2024-07-22 18:29:29.685286] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:17.852 [2024-07-22 18:29:29.685314] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:17.852 [2024-07-22 18:29:29.685339] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:17.852 [2024-07-22 18:29:29.685367] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:17.852 [2024-07-22 18:29:29.685393] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:17.852 [2024-07-22 18:29:29.685417] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:17.852 [2024-07-22 18:29:29.685437] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:17.852 [2024-07-22 18:29:29.685464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.852 [2024-07-22 18:29:29.685483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:17.852 [2024-07-22 18:29:29.685507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:22:17.852 [2024-07-22 18:29:29.685528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.852 [2024-07-22 18:29:29.685664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.852 [2024-07-22 18:29:29.685714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:17.852 [2024-07-22 18:29:29.685741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:17.852 [2024-07-22 18:29:29.685760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.852 [2024-07-22 18:29:29.685926] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:17.852 [2024-07-22 18:29:29.685959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:17.852 [2024-07-22 18:29:29.685993] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:17.852 [2024-07-22 18:29:29.686015] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.852 [2024-07-22 18:29:29.686039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:17.852 [2024-07-22 18:29:29.686056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:17.852 [2024-07-22 18:29:29.686080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:17.852 [2024-07-22 18:29:29.686098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:17.852 [2024-07-22 18:29:29.686125] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:17.852 [2024-07-22 18:29:29.686144] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:17.852 [2024-07-22 18:29:29.686169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:17.852 [2024-07-22 18:29:29.686190] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:17.852 [2024-07-22 18:29:29.686211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:17.852 [2024-07-22 18:29:29.686228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:17.852 [2024-07-22 18:29:29.686249] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:17.852 [2024-07-22 18:29:29.686268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.852 [2024-07-22 18:29:29.686290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:17.852 [2024-07-22 18:29:29.686310] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:17.852 [2024-07-22 18:29:29.686333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.852 [2024-07-22 18:29:29.686351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:17.852 [2024-07-22 18:29:29.686371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:17.852 [2024-07-22 18:29:29.686388] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:17.852 [2024-07-22 18:29:29.686409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:17.852 [2024-07-22 18:29:29.686430] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:17.852 [2024-07-22 18:29:29.686455] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:17.852 [2024-07-22 18:29:29.686475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:17.852 [2024-07-22 18:29:29.686497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:17.852 [2024-07-22 18:29:29.686539] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:17.852 [2024-07-22 18:29:29.686565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:17.852 [2024-07-22 18:29:29.686587] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:17.852 [2024-07-22 18:29:29.686612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:17.852 [2024-07-22 18:29:29.686638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:17.852 [2024-07-22 
18:29:29.686659] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:17.852 [2024-07-22 18:29:29.686707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:17.852 [2024-07-22 18:29:29.686733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:17.852 [2024-07-22 18:29:29.686754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:17.852 [2024-07-22 18:29:29.686777] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:17.852 [2024-07-22 18:29:29.686796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:17.852 [2024-07-22 18:29:29.686820] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:17.852 [2024-07-22 18:29:29.686837] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.852 [2024-07-22 18:29:29.686861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:17.852 [2024-07-22 18:29:29.686879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:17.852 [2024-07-22 18:29:29.686901] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.852 [2024-07-22 18:29:29.686920] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:17.852 [2024-07-22 18:29:29.686949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:17.852 [2024-07-22 18:29:29.686971] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:17.852 [2024-07-22 18:29:29.686994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.852 [2024-07-22 18:29:29.687012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:17.852 [2024-07-22 18:29:29.687032] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:17.852 [2024-07-22 18:29:29.687050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:17.852 [2024-07-22 18:29:29.687073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:17.852 [2024-07-22 18:29:29.687092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:17.852 [2024-07-22 18:29:29.687115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:17.852 [2024-07-22 18:29:29.687137] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:17.852 [2024-07-22 18:29:29.687163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:17.853 [2024-07-22 18:29:29.687185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:17.853 [2024-07-22 18:29:29.687216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:17.853 [2024-07-22 18:29:29.687236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:17.853 [2024-07-22 18:29:29.687260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:17.853 [2024-07-22 18:29:29.687281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:17.853 
[2024-07-22 18:29:29.687308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:17.853 [2024-07-22 18:29:29.687331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:17.853 [2024-07-22 18:29:29.687355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:17.853 [2024-07-22 18:29:29.687374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:17.853 [2024-07-22 18:29:29.687412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:17.853 [2024-07-22 18:29:29.687432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:17.853 [2024-07-22 18:29:29.687454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:17.853 [2024-07-22 18:29:29.687475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:17.853 [2024-07-22 18:29:29.687500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:17.853 [2024-07-22 18:29:29.687520] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:17.853 [2024-07-22 18:29:29.687546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:17.853 [2024-07-22 18:29:29.687567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:17.853 [2024-07-22 18:29:29.687592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:17.853 [2024-07-22 18:29:29.687612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:17.853 [2024-07-22 18:29:29.687637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:17.853 [2024-07-22 18:29:29.687659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.853 [2024-07-22 18:29:29.687700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:17.853 [2024-07-22 18:29:29.687725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.815 ms 00:22:17.853 [2024-07-22 18:29:29.687747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.853 [2024-07-22 18:29:29.738026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.853 [2024-07-22 18:29:29.738099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:17.853 [2024-07-22 18:29:29.738121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.115 ms 00:22:17.853 [2024-07-22 18:29:29.738141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.853 [2024-07-22 18:29:29.738350] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.853 [2024-07-22 18:29:29.738377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:17.853 [2024-07-22 18:29:29.738393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:17.853 [2024-07-22 18:29:29.738407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.853 [2024-07-22 18:29:29.782237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.853 [2024-07-22 18:29:29.782316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:17.853 [2024-07-22 18:29:29.782337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.794 ms 00:22:17.853 [2024-07-22 18:29:29.782352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.853 [2024-07-22 18:29:29.782483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.853 [2024-07-22 18:29:29.782507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:17.853 [2024-07-22 18:29:29.782522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:17.853 [2024-07-22 18:29:29.782536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.853 [2024-07-22 18:29:29.783123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.853 [2024-07-22 18:29:29.783160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:17.853 [2024-07-22 18:29:29.783181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:22:17.853 [2024-07-22 18:29:29.783196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.853 [2024-07-22 18:29:29.783376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.853 [2024-07-22 18:29:29.783410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:17.853 [2024-07-22 18:29:29.783424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:22:17.853 [2024-07-22 18:29:29.783438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.853 [2024-07-22 18:29:29.804157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.853 [2024-07-22 18:29:29.804226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:17.853 [2024-07-22 18:29:29.804246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.686 ms 00:22:17.853 [2024-07-22 18:29:29.804261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.853 [2024-07-22 18:29:29.820972] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:17.853 [2024-07-22 18:29:29.821030] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:17.853 [2024-07-22 18:29:29.821051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.853 [2024-07-22 18:29:29.821067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:17.853 [2024-07-22 18:29:29.821082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.606 ms 00:22:17.853 [2024-07-22 18:29:29.821096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.853 [2024-07-22 18:29:29.850353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.853 [2024-07-22 
18:29:29.850432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:17.853 [2024-07-22 18:29:29.850455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.147 ms 00:22:17.853 [2024-07-22 18:29:29.850470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-07-22 18:29:29.866771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-07-22 18:29:29.866836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:18.123 [2024-07-22 18:29:29.866869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.163 ms 00:22:18.123 [2024-07-22 18:29:29.866888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-07-22 18:29:29.882622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-07-22 18:29:29.882719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:18.123 [2024-07-22 18:29:29.882742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.606 ms 00:22:18.123 [2024-07-22 18:29:29.882758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-07-22 18:29:29.883780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-07-22 18:29:29.883819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:18.123 [2024-07-22 18:29:29.883836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:22:18.123 [2024-07-22 18:29:29.883851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-07-22 18:29:29.973842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-07-22 18:29:29.973934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:18.123 [2024-07-22 18:29:29.973957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.955 ms 00:22:18.123 [2024-07-22 18:29:29.973973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-07-22 18:29:29.987676] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:18.123 [2024-07-22 18:29:30.009442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-07-22 18:29:30.009527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:18.123 [2024-07-22 18:29:30.009556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.289 ms 00:22:18.123 [2024-07-22 18:29:30.009573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-07-22 18:29:30.009746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-07-22 18:29:30.009769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:18.123 [2024-07-22 18:29:30.009799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:18.123 [2024-07-22 18:29:30.009812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-07-22 18:29:30.009892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-07-22 18:29:30.009909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:18.123 [2024-07-22 18:29:30.009924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:18.123 
[2024-07-22 18:29:30.009936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-07-22 18:29:30.009978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-07-22 18:29:30.009993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:18.124 [2024-07-22 18:29:30.010011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:18.124 [2024-07-22 18:29:30.010023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.124 [2024-07-22 18:29:30.010067] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:18.124 [2024-07-22 18:29:30.010083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.124 [2024-07-22 18:29:30.010100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:18.124 [2024-07-22 18:29:30.010113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:18.124 [2024-07-22 18:29:30.010127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.124 [2024-07-22 18:29:30.042256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.124 [2024-07-22 18:29:30.042324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:18.124 [2024-07-22 18:29:30.042346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.093 ms 00:22:18.124 [2024-07-22 18:29:30.042362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.124 [2024-07-22 18:29:30.042531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.124 [2024-07-22 18:29:30.042559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:18.124 [2024-07-22 18:29:30.042574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:18.124 [2024-07-22 18:29:30.042596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.124 [2024-07-22 18:29:30.043821] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:18.124 [2024-07-22 18:29:30.048360] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.174 ms, result 0 00:22:18.124 [2024-07-22 18:29:30.049486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:18.124 Some configs were skipped because the RPC state that can call them passed over. 
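The two bdev_ftl_unmap calls that follow trim 1024 blocks at each end of the device: with 23592960 L2P entries (see the layout dump during startup above), --lba 23591936 is exactly 23592960 - 1024, i.e. the last 1024 blocks of the LBA space. Reproduced as a standalone sketch, using the same commands the traced script issues but with the arithmetic made explicit:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NUM_LBAS=23592960   # L2P entry count reported during startup
CHUNK=1024
$RPC bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks "$CHUNK"
$RPC bdev_ftl_unmap -b ftl0 --lba "$(( NUM_LBAS - CHUNK ))" --num_blocks "$CHUNK"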
00:22:18.124 18:29:30 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:18.382 [2024-07-22 18:29:30.307176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.382 [2024-07-22 18:29:30.307474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:18.382 [2024-07-22 18:29:30.307627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:22:18.383 [2024-07-22 18:29:30.307785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.383 [2024-07-22 18:29:30.307960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.325 ms, result 0 00:22:18.383 true 00:22:18.383 18:29:30 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:18.641 [2024-07-22 18:29:30.587516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.641 [2024-07-22 18:29:30.587611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:18.641 [2024-07-22 18:29:30.587649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:22:18.641 [2024-07-22 18:29:30.587672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.641 [2024-07-22 18:29:30.587770] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.651 ms, result 0 00:22:18.641 true 00:22:18.641 18:29:30 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 81242 00:22:18.641 18:29:30 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81242 ']' 00:22:18.641 18:29:30 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81242 00:22:18.641 18:29:30 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:22:18.641 18:29:30 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:18.641 18:29:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81242 00:22:18.641 killing process with pid 81242 00:22:18.641 18:29:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:18.641 18:29:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:18.641 18:29:30 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81242' 00:22:18.641 18:29:30 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81242 00:22:18.641 18:29:30 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81242 00:22:20.015 [2024-07-22 18:29:31.693161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.015 [2024-07-22 18:29:31.693246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:20.015 [2024-07-22 18:29:31.693273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:20.015 [2024-07-22 18:29:31.693287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.015 [2024-07-22 18:29:31.693325] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:20.015 [2024-07-22 18:29:31.696904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.015 [2024-07-22 18:29:31.696947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:20.015 [2024-07-22 18:29:31.696963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.555 ms 00:22:20.015 [2024-07-22 18:29:31.696979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.015 [2024-07-22 18:29:31.697327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.015 [2024-07-22 18:29:31.697360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:20.015 [2024-07-22 18:29:31.697376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:22:20.015 [2024-07-22 18:29:31.697390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.015 [2024-07-22 18:29:31.701399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.015 [2024-07-22 18:29:31.701451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:20.015 [2024-07-22 18:29:31.701471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.984 ms 00:22:20.015 [2024-07-22 18:29:31.701486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.015 [2024-07-22 18:29:31.708799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.015 [2024-07-22 18:29:31.708843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:20.015 [2024-07-22 18:29:31.708859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.265 ms 00:22:20.015 [2024-07-22 18:29:31.708875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.015 [2024-07-22 18:29:31.722936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.015 [2024-07-22 18:29:31.722997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:20.015 [2024-07-22 18:29:31.723017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.975 ms 00:22:20.015 [2024-07-22 18:29:31.723035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.015 [2024-07-22 18:29:31.731754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.015 [2024-07-22 18:29:31.731824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:20.015 [2024-07-22 18:29:31.731846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.664 ms 00:22:20.015 [2024-07-22 18:29:31.731861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.015 [2024-07-22 18:29:31.732050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.015 [2024-07-22 18:29:31.732076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:20.015 [2024-07-22 18:29:31.732091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:22:20.015 [2024-07-22 18:29:31.732119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.015 [2024-07-22 18:29:31.745536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.015 [2024-07-22 18:29:31.745593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:20.015 [2024-07-22 18:29:31.745612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.389 ms 00:22:20.015 [2024-07-22 18:29:31.745628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.015 [2024-07-22 18:29:31.758101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.015 [2024-07-22 18:29:31.758152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:20.015 [2024-07-22 
18:29:31.758174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.421 ms 00:22:20.015 [2024-07-22 18:29:31.758205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.015 [2024-07-22 18:29:31.770366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.015 [2024-07-22 18:29:31.770416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:20.015 [2024-07-22 18:29:31.770435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.100 ms 00:22:20.015 [2024-07-22 18:29:31.770449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.015 [2024-07-22 18:29:31.782479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.015 [2024-07-22 18:29:31.782529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:20.015 [2024-07-22 18:29:31.782548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.944 ms 00:22:20.015 [2024-07-22 18:29:31.782562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.015 [2024-07-22 18:29:31.782609] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:20.015 [2024-07-22 18:29:31.782638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782887] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:20.015 [2024-07-22 18:29:31.782900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.782914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.782926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.782948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.782960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.782977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.782989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 
18:29:31.783232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:22:20.016 [2024-07-22 18:29:31.783598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.783987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.784003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.784016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.784031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.784043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.784058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.784070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:20.016 [2024-07-22 18:29:31.784094] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:20.016 [2024-07-22 18:29:31.784111] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 133f7ce8-c691-4f98-9963-df4bc40dc329 00:22:20.016 [2024-07-22 18:29:31.784129] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:20.016 [2024-07-22 18:29:31.784140] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:20.017 [2024-07-22 18:29:31.784154] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:20.017 [2024-07-22 18:29:31.784166] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:20.017 [2024-07-22 18:29:31.784180] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:20.017 [2024-07-22 18:29:31.784192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:20.017 [2024-07-22 18:29:31.784206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:20.017 [2024-07-22 18:29:31.784217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:20.017 [2024-07-22 18:29:31.784245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:20.017 [2024-07-22 18:29:31.784257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.017 [2024-07-22 18:29:31.784271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:20.017 [2024-07-22 18:29:31.784284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:22:20.017 [2024-07-22 18:29:31.784298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.017 [2024-07-22 18:29:31.801191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.017 [2024-07-22 18:29:31.801250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:20.017 [2024-07-22 18:29:31.801270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.846 ms 00:22:20.017 [2024-07-22 18:29:31.801288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.017 [2024-07-22 18:29:31.801816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:20.017 [2024-07-22 18:29:31.801847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:20.017 [2024-07-22 18:29:31.801866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:22:20.017 [2024-07-22 18:29:31.801884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.017 [2024-07-22 18:29:31.860678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.017 [2024-07-22 18:29:31.860822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:20.017 [2024-07-22 18:29:31.860859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.017 [2024-07-22 18:29:31.860886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.017 [2024-07-22 18:29:31.861122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.017 [2024-07-22 18:29:31.861162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:20.017 [2024-07-22 18:29:31.861188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.017 [2024-07-22 18:29:31.861223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.017 [2024-07-22 18:29:31.861330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.017 [2024-07-22 18:29:31.861367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:20.017 [2024-07-22 18:29:31.861389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.017 [2024-07-22 18:29:31.861416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.017 [2024-07-22 18:29:31.861458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.017 [2024-07-22 18:29:31.861480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:20.017 [2024-07-22 18:29:31.861493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.017 [2024-07-22 18:29:31.861506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.017 [2024-07-22 18:29:31.969224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.017 [2024-07-22 18:29:31.969306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:20.017 [2024-07-22 18:29:31.969328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.017 [2024-07-22 18:29:31.969344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.274 [2024-07-22 18:29:32.054699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.274 [2024-07-22 18:29:32.054778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:20.274 [2024-07-22 18:29:32.054799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.274 [2024-07-22 18:29:32.054815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.274 [2024-07-22 18:29:32.054929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.274 [2024-07-22 18:29:32.054952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:20.274 [2024-07-22 18:29:32.054966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.274 [2024-07-22 18:29:32.054986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
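
Each management step above is traced by mngt/ftl_mngt.c as a "name:" entry followed by a "duration:" entry. A sketch, again over a hypothetical console.log capture, that pairs the two in log order and ranks the slowest steps; it assumes every "name:" entry is followed by its "duration:" entry, which holds for this output:

    import re

    # Pair "name: <step>" entries with the "duration: <ms> ms" entries that
    # follow them and print the slowest steps. console.log is hypothetical.
    text = open("console.log").read()
    names = re.findall(r"name: (.+?) \d{2}:\d{2}:\d{2}\.\d{3}", text)
    durations = [float(d) for d in re.findall(r"duration: ([0-9.]+) ms", text)]

    for name, ms in sorted(zip(names, durations), key=lambda p: -p[1])[:5]:
        print(f"{ms:10.3f} ms  {name}")

In the shutdown traced above, "Deinitialize L2P" at 16.846 ms dominates, while the Rollback steps all report 0.000 ms, presumably because a clean shutdown leaves them nothing to undo.
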
00:22:20.274 [2024-07-22 18:29:32.055026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.274 [2024-07-22 18:29:32.055043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:20.274 [2024-07-22 18:29:32.055056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.274 [2024-07-22 18:29:32.055069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.274 [2024-07-22 18:29:32.055205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.274 [2024-07-22 18:29:32.055228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:20.274 [2024-07-22 18:29:32.055242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.274 [2024-07-22 18:29:32.055255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.274 [2024-07-22 18:29:32.055308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.274 [2024-07-22 18:29:32.055337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:20.274 [2024-07-22 18:29:32.055351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.274 [2024-07-22 18:29:32.055365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.274 [2024-07-22 18:29:32.055429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.274 [2024-07-22 18:29:32.055454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:20.274 [2024-07-22 18:29:32.055466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.274 [2024-07-22 18:29:32.055483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.274 [2024-07-22 18:29:32.055541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.274 [2024-07-22 18:29:32.055562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:20.274 [2024-07-22 18:29:32.055575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.274 [2024-07-22 18:29:32.055589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.274 [2024-07-22 18:29:32.055780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 362.597 ms, result 0 00:22:21.208 18:29:33 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:21.208 [2024-07-22 18:29:33.152054] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
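
The step launched above (trim.sh line 105) shells out to spdk_dd to read 65536 blocks from the FTL bdev ftl0 into a plain file, with the JSON config recreating the bdev stack. A sketch of the same invocation wrapped in subprocess, every argument copied verbatim from the log; the paths belong to this CI checkout and would need adjusting elsewhere:

    import subprocess

    # Re-run the copy step from the log: read 65536 blocks from the FTL bdev
    # "ftl0" (set up from ftl.json) into the output file. Paths are verbatim
    # from this log and assume the same checkout layout.
    cmd = [
        "/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd",
        "--ib=ftl0",        # input bdev: the FTL device under test
        "--of=/home/vagrant/spdk_repo/spdk/test/ftl/data",
        "--count=65536",    # number of blocks to copy
        "--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json",
    ]
    subprocess.run(cmd, check=True)  # raise if spdk_dd exits non-zero
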
00:22:21.208 [2024-07-22 18:29:33.152239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81309 ] 00:22:21.465 [2024-07-22 18:29:33.325749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.722 [2024-07-22 18:29:33.557973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.981 [2024-07-22 18:29:33.901395] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:21.981 [2024-07-22 18:29:33.901491] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:22.241 [2024-07-22 18:29:34.065671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.241 [2024-07-22 18:29:34.065755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:22.241 [2024-07-22 18:29:34.065777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:22.241 [2024-07-22 18:29:34.065789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.241 [2024-07-22 18:29:34.069082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.241 [2024-07-22 18:29:34.069127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:22.241 [2024-07-22 18:29:34.069144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.260 ms 00:22:22.241 [2024-07-22 18:29:34.069157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.241 [2024-07-22 18:29:34.069278] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:22.241 [2024-07-22 18:29:34.070206] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:22.241 [2024-07-22 18:29:34.070248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.241 [2024-07-22 18:29:34.070263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:22.241 [2024-07-22 18:29:34.070277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:22:22.241 [2024-07-22 18:29:34.070289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.241 [2024-07-22 18:29:34.072264] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:22.241 [2024-07-22 18:29:34.089210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.241 [2024-07-22 18:29:34.089266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:22.241 [2024-07-22 18:29:34.089308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.945 ms 00:22:22.241 [2024-07-22 18:29:34.089322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.241 [2024-07-22 18:29:34.089481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.241 [2024-07-22 18:29:34.089504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:22.241 [2024-07-22 18:29:34.089518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:22.241 [2024-07-22 18:29:34.089530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.241 [2024-07-22 18:29:34.098745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
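
The two "Currently unable to find bdev with name: nvc0n1" notices above appear to be spdk_dd polling while the cache bdev is still being created; startup proceeds normally right after. A sketch that scans a hypothetical console.log for *ERROR* entries and counts such open retries, useful when a wait like this never resolves:

    import re
    from collections import Counter

    # Flag *ERROR* entries and count "unable to find bdev" retries per bdev.
    # console.log is a hypothetical capture of this console output.
    retries = Counter()
    with open("console.log") as log:
        for line in log:
            if "*ERROR*" in line:
                print("error:", line.rstrip())
            for name in re.findall(r"unable to find bdev with name: (\S+)",
                                    line):
                retries[name] += 1

    for name, count in retries.items():
        print(f"bdev {name}: {count} open retries")
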
00:22:22.241 [2024-07-22 18:29:34.098818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:22.241 [2024-07-22 18:29:34.098837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.147 ms 00:22:22.241 [2024-07-22 18:29:34.098849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.241 [2024-07-22 18:29:34.099017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.241 [2024-07-22 18:29:34.099041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:22.241 [2024-07-22 18:29:34.099055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:22.241 [2024-07-22 18:29:34.099067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.241 [2024-07-22 18:29:34.099121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.241 [2024-07-22 18:29:34.099139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:22.241 [2024-07-22 18:29:34.099157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:22.241 [2024-07-22 18:29:34.099169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.241 [2024-07-22 18:29:34.099209] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:22.241 [2024-07-22 18:29:34.104875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.241 [2024-07-22 18:29:34.104925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:22.241 [2024-07-22 18:29:34.104944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.676 ms 00:22:22.241 [2024-07-22 18:29:34.104957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.241 [2024-07-22 18:29:34.105081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.241 [2024-07-22 18:29:34.105101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:22.241 [2024-07-22 18:29:34.105115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:22.241 [2024-07-22 18:29:34.105127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.241 [2024-07-22 18:29:34.105160] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:22.241 [2024-07-22 18:29:34.105203] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:22.241 [2024-07-22 18:29:34.105255] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:22.241 [2024-07-22 18:29:34.105277] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:22.241 [2024-07-22 18:29:34.105383] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:22.241 [2024-07-22 18:29:34.105399] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:22.241 [2024-07-22 18:29:34.105415] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:22.241 [2024-07-22 18:29:34.105431] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:22.241 [2024-07-22 18:29:34.105445] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:22.241 [2024-07-22 18:29:34.105458] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:22.241 [2024-07-22 18:29:34.105475] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:22.241 [2024-07-22 18:29:34.105486] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:22.241 [2024-07-22 18:29:34.105498] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:22.241 [2024-07-22 18:29:34.105511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.241 [2024-07-22 18:29:34.105523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:22.241 [2024-07-22 18:29:34.105535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:22:22.241 [2024-07-22 18:29:34.105547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.241 [2024-07-22 18:29:34.105646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.241 [2024-07-22 18:29:34.105662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:22.241 [2024-07-22 18:29:34.105675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:22.241 [2024-07-22 18:29:34.105719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.241 [2024-07-22 18:29:34.105832] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:22.241 [2024-07-22 18:29:34.105851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:22.241 [2024-07-22 18:29:34.105864] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:22.241 [2024-07-22 18:29:34.105876] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:22.241 [2024-07-22 18:29:34.105888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:22.241 [2024-07-22 18:29:34.105899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:22.241 [2024-07-22 18:29:34.105910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:22.241 [2024-07-22 18:29:34.105921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:22.241 [2024-07-22 18:29:34.105931] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:22.241 [2024-07-22 18:29:34.105941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:22.241 [2024-07-22 18:29:34.105952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:22.241 [2024-07-22 18:29:34.105963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:22.241 [2024-07-22 18:29:34.105973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:22.241 [2024-07-22 18:29:34.105983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:22.241 [2024-07-22 18:29:34.105995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:22.241 [2024-07-22 18:29:34.106005] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:22.241 [2024-07-22 18:29:34.106017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:22.241 [2024-07-22 18:29:34.106028] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:22.242 [2024-07-22 18:29:34.106054] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:22.242 [2024-07-22 18:29:34.106066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:22.242 [2024-07-22 18:29:34.106077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:22.242 [2024-07-22 18:29:34.106087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:22.242 [2024-07-22 18:29:34.106098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:22.242 [2024-07-22 18:29:34.106109] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:22.242 [2024-07-22 18:29:34.106119] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:22.242 [2024-07-22 18:29:34.106130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:22.242 [2024-07-22 18:29:34.106141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:22.242 [2024-07-22 18:29:34.106152] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:22.242 [2024-07-22 18:29:34.106163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:22.242 [2024-07-22 18:29:34.106178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:22.242 [2024-07-22 18:29:34.106196] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:22.242 [2024-07-22 18:29:34.106212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:22.242 [2024-07-22 18:29:34.106223] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:22.242 [2024-07-22 18:29:34.106234] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:22.242 [2024-07-22 18:29:34.106245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:22.242 [2024-07-22 18:29:34.106255] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:22.242 [2024-07-22 18:29:34.106266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:22.242 [2024-07-22 18:29:34.106276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:22.242 [2024-07-22 18:29:34.106287] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:22.242 [2024-07-22 18:29:34.106298] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:22.242 [2024-07-22 18:29:34.106308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:22.242 [2024-07-22 18:29:34.106318] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:22.242 [2024-07-22 18:29:34.106329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:22.242 [2024-07-22 18:29:34.106339] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:22.242 [2024-07-22 18:29:34.106350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:22.242 [2024-07-22 18:29:34.106362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:22.242 [2024-07-22 18:29:34.106373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:22.242 [2024-07-22 18:29:34.106388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:22.242 [2024-07-22 18:29:34.106401] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:22.242 [2024-07-22 18:29:34.106412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:22.242 
[2024-07-22 18:29:34.106424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:22.242 [2024-07-22 18:29:34.106434] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:22.242 [2024-07-22 18:29:34.106445] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:22.242 [2024-07-22 18:29:34.106458] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:22.242 [2024-07-22 18:29:34.106479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:22.242 [2024-07-22 18:29:34.106492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:22.242 [2024-07-22 18:29:34.106504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:22.242 [2024-07-22 18:29:34.106515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:22.242 [2024-07-22 18:29:34.106527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:22.242 [2024-07-22 18:29:34.106539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:22.242 [2024-07-22 18:29:34.106551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:22.242 [2024-07-22 18:29:34.106562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:22.242 [2024-07-22 18:29:34.106574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:22.242 [2024-07-22 18:29:34.106585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:22.242 [2024-07-22 18:29:34.106597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:22.242 [2024-07-22 18:29:34.106608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:22.242 [2024-07-22 18:29:34.106620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:22.242 [2024-07-22 18:29:34.106631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:22.242 [2024-07-22 18:29:34.106643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:22.242 [2024-07-22 18:29:34.106655] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:22.242 [2024-07-22 18:29:34.106668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:22.242 [2024-07-22 18:29:34.106697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:22.242 [2024-07-22 18:29:34.106711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:22.242 [2024-07-22 18:29:34.106723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:22.242 [2024-07-22 18:29:34.106735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:22.242 [2024-07-22 18:29:34.106748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.242 [2024-07-22 18:29:34.106761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:22.242 [2024-07-22 18:29:34.106773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:22:22.242 [2024-07-22 18:29:34.106784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.242 [2024-07-22 18:29:34.167923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.242 [2024-07-22 18:29:34.167985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:22.242 [2024-07-22 18:29:34.168007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.054 ms 00:22:22.242 [2024-07-22 18:29:34.168026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.242 [2024-07-22 18:29:34.168234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.242 [2024-07-22 18:29:34.168256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:22.242 [2024-07-22 18:29:34.168277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:22.242 [2024-07-22 18:29:34.168289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.242 [2024-07-22 18:29:34.212920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.242 [2024-07-22 18:29:34.212984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:22.242 [2024-07-22 18:29:34.213005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.594 ms 00:22:22.242 [2024-07-22 18:29:34.213018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.242 [2024-07-22 18:29:34.213156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.242 [2024-07-22 18:29:34.213176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:22.242 [2024-07-22 18:29:34.213190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:22.242 [2024-07-22 18:29:34.213202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.242 [2024-07-22 18:29:34.213818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.242 [2024-07-22 18:29:34.213840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:22.242 [2024-07-22 18:29:34.213864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:22:22.242 [2024-07-22 18:29:34.213876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.242 [2024-07-22 18:29:34.214057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.242 [2024-07-22 18:29:34.214097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:22.242 [2024-07-22 18:29:34.214111] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:22:22.242 [2024-07-22 18:29:34.214123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.242 [2024-07-22 18:29:34.232912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.242 [2024-07-22 18:29:34.232964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:22.243 [2024-07-22 18:29:34.232983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.755 ms 00:22:22.243 [2024-07-22 18:29:34.232996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.243 [2024-07-22 18:29:34.249994] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:22.243 [2024-07-22 18:29:34.250042] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:22.243 [2024-07-22 18:29:34.250063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.243 [2024-07-22 18:29:34.250076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:22.243 [2024-07-22 18:29:34.250091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.889 ms 00:22:22.243 [2024-07-22 18:29:34.250103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.501 [2024-07-22 18:29:34.279375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.501 [2024-07-22 18:29:34.279457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:22.501 [2024-07-22 18:29:34.279477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.163 ms 00:22:22.501 [2024-07-22 18:29:34.279490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.501 [2024-07-22 18:29:34.297124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.501 [2024-07-22 18:29:34.297180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:22.501 [2024-07-22 18:29:34.297206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.523 ms 00:22:22.501 [2024-07-22 18:29:34.297224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.502 [2024-07-22 18:29:34.314005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.502 [2024-07-22 18:29:34.314061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:22.502 [2024-07-22 18:29:34.314081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.660 ms 00:22:22.502 [2024-07-22 18:29:34.314094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.502 [2024-07-22 18:29:34.315078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.502 [2024-07-22 18:29:34.315122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:22.502 [2024-07-22 18:29:34.315139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:22:22.502 [2024-07-22 18:29:34.315151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.502 [2024-07-22 18:29:34.396070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.502 [2024-07-22 18:29:34.396156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:22.502 [2024-07-22 18:29:34.396183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 80.873 ms 00:22:22.502 [2024-07-22 18:29:34.396204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.502 [2024-07-22 18:29:34.410384] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:22.502 [2024-07-22 18:29:34.432645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.502 [2024-07-22 18:29:34.432736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:22.502 [2024-07-22 18:29:34.432759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.272 ms 00:22:22.502 [2024-07-22 18:29:34.432772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.502 [2024-07-22 18:29:34.432930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.502 [2024-07-22 18:29:34.432952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:22.502 [2024-07-22 18:29:34.432972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:22.502 [2024-07-22 18:29:34.432984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.502 [2024-07-22 18:29:34.433061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.502 [2024-07-22 18:29:34.433079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:22.502 [2024-07-22 18:29:34.433092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:22.502 [2024-07-22 18:29:34.433105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.502 [2024-07-22 18:29:34.433140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.502 [2024-07-22 18:29:34.433155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:22.502 [2024-07-22 18:29:34.433170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:22.502 [2024-07-22 18:29:34.433200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.502 [2024-07-22 18:29:34.433249] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:22.502 [2024-07-22 18:29:34.433267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.502 [2024-07-22 18:29:34.433279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:22.502 [2024-07-22 18:29:34.433292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:22.502 [2024-07-22 18:29:34.433304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.502 [2024-07-22 18:29:34.464820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.502 [2024-07-22 18:29:34.464877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:22.502 [2024-07-22 18:29:34.464905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.476 ms 00:22:22.502 [2024-07-22 18:29:34.464918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.502 [2024-07-22 18:29:34.465063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.502 [2024-07-22 18:29:34.465085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:22.502 [2024-07-22 18:29:34.465099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:22.502 [2024-07-22 18:29:34.465111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
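
The layout dump earlier in this startup (the dump_region offset/blocks triplets plus the superblock blob entries) describes where every FTL metadata region sits on the NV cache and base devices. A sketch that rebuilds both tables and flags overlaps within each device; it assumes console.log (hypothetical) holds just that dump, with the "Base device layout:" header separating the two sections as above:

    import re

    # Rebuild the Region / offset / blocks tables from the layout dump and
    # flag regions that overlap the one before them. console.log is a
    # hypothetical capture of the dump; re.S lets the triplets span entries.
    REGION = re.compile(
        r"Region (\w+)\s.*?offset: ([0-9.]+) MiB.*?blocks: ([0-9.]+) MiB",
        re.S)

    text = open("console.log").read()
    nvc, base = text.split("Base device layout:", 1)

    for label, section in (("nvc", nvc), ("base", base)):
        prev_end = 0.0
        for name, off, size in sorted(REGION.findall(section),
                                      key=lambda r: float(r[1])):
            off, size = float(off), float(size)
            mark = "  <-- overlaps previous region" if off < prev_end else ""
            print(f"{label:5} {name:16} {off:12.2f} MiB {size:11.2f} MiB{mark}")
            prev_end = max(prev_end, off + size)

Run against the dump above, this lists the NV cache regions in offset order from sb at 0.00 MiB to nvc_md_mirror at 124.00 MiB and reports no overlaps on either device.
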
00:22:22.502 [2024-07-22 18:29:34.466290] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:22.502 [2024-07-22 18:29:34.470434] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 400.250 ms, result 0 00:22:22.502 [2024-07-22 18:29:34.471233] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:22.502 [2024-07-22 18:29:34.487182] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:32.999  Copying: 27/256 [MB] (27 MBps) Copying: 51/256 [MB] (24 MBps) Copying: 76/256 [MB] (25 MBps) Copying: 102/256 [MB] (25 MBps) Copying: 128/256 [MB] (26 MBps) Copying: 153/256 [MB] (25 MBps) Copying: 179/256 [MB] (25 MBps) Copying: 204/256 [MB] (25 MBps) Copying: 229/256 [MB] (25 MBps) Copying: 255/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-22 18:29:44.794139] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:32.999 [2024-07-22 18:29:44.812773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.999 [2024-07-22 18:29:44.812827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:32.999 [2024-07-22 18:29:44.812849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:32.999 [2024-07-22 18:29:44.812862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.999 [2024-07-22 18:29:44.812897] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:32.999 [2024-07-22 18:29:44.816501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.999 [2024-07-22 18:29:44.816547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:32.999 [2024-07-22 18:29:44.816563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms 00:22:32.999 [2024-07-22 18:29:44.816576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.999 [2024-07-22 18:29:44.816899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.999 [2024-07-22 18:29:44.816918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:32.999 [2024-07-22 18:29:44.816932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:22:32.999 [2024-07-22 18:29:44.816944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.999 [2024-07-22 18:29:44.820583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.999 [2024-07-22 18:29:44.820615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:33.000 [2024-07-22 18:29:44.820630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.615 ms 00:22:33.000 [2024-07-22 18:29:44.820649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.000 [2024-07-22 18:29:44.827939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.000 [2024-07-22 18:29:44.827972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:33.000 [2024-07-22 18:29:44.827986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.264 ms 00:22:33.000 [2024-07-22 18:29:44.827998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.000 [2024-07-22 
18:29:44.859317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.000 [2024-07-22 18:29:44.859399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:33.000 [2024-07-22 18:29:44.859423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.232 ms 00:22:33.000 [2024-07-22 18:29:44.859436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.000 [2024-07-22 18:29:44.877545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.000 [2024-07-22 18:29:44.877604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:33.000 [2024-07-22 18:29:44.877626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.014 ms 00:22:33.000 [2024-07-22 18:29:44.877639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.000 [2024-07-22 18:29:44.877864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.000 [2024-07-22 18:29:44.877887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:33.000 [2024-07-22 18:29:44.877901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:22:33.000 [2024-07-22 18:29:44.877914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.000 [2024-07-22 18:29:44.920371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.000 [2024-07-22 18:29:44.920503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:33.000 [2024-07-22 18:29:44.920539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.419 ms 00:22:33.000 [2024-07-22 18:29:44.920563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.000 [2024-07-22 18:29:44.958912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.000 [2024-07-22 18:29:44.958980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:33.000 [2024-07-22 18:29:44.959002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.095 ms 00:22:33.000 [2024-07-22 18:29:44.959014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.000 [2024-07-22 18:29:44.988631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.000 [2024-07-22 18:29:44.988674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:33.000 [2024-07-22 18:29:44.988707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.541 ms 00:22:33.000 [2024-07-22 18:29:44.988720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.260 [2024-07-22 18:29:45.021407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.260 [2024-07-22 18:29:45.021457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:33.260 [2024-07-22 18:29:45.021476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.582 ms 00:22:33.260 [2024-07-22 18:29:45.021488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.260 [2024-07-22 18:29:45.021563] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:33.260 [2024-07-22 18:29:45.021590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
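
The "Copying: N/256 [MB] (R MBps)" counters above are spdk_dd progress refreshes (carriage-return updates that the CI console flattened onto one line), ending with a reported average of 25 MBps over the 256 MB transfer. A sketch that recovers the interval rates from a hypothetical console.log capture and compares their mean with the reported figure:

    import re

    # Pull the "(R MBps)" progress samples and the "average N MBps" summary.
    # console.log is a hypothetical capture of this console output.
    text = open("console.log").read()
    rates = [int(r) for r in re.findall(r"\((\d+) MBps\)", text)]
    summary = re.search(r"average (\d+) MBps", text)

    if rates:
        print("interval rates:", rates)
        print("mean of intervals:", round(sum(rates) / len(rates), 1), "MBps")
    if summary:
        print("reported average:", summary.group(1), "MBps")

The mean of the ten interval samples need not exactly equal the end-to-end average (the first interval includes ramp-up), so small differences are expected.
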
00:22:33.260 [2024-07-22 18:29:45.021627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:22:33.260 [2024-07-22 18:29:45.021968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.021993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:33.260 [2024-07-22 18:29:45.022471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022582] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:33.261 [2024-07-22 18:29:45.022903] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:33.261 [2024-07-22 18:29:45.022917] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: 133f7ce8-c691-4f98-9963-df4bc40dc329 00:22:33.261 [2024-07-22 18:29:45.022930] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:33.261 [2024-07-22 18:29:45.022942] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:33.261 [2024-07-22 18:29:45.022968] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:33.261 [2024-07-22 18:29:45.022980] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:33.261 [2024-07-22 18:29:45.022992] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:33.261 [2024-07-22 18:29:45.023004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:33.261 [2024-07-22 18:29:45.023016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:33.261 [2024-07-22 18:29:45.023026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:33.261 [2024-07-22 18:29:45.023037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:33.261 [2024-07-22 18:29:45.023055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.261 [2024-07-22 18:29:45.023068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:33.261 [2024-07-22 18:29:45.023081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.494 ms 00:22:33.261 [2024-07-22 18:29:45.023098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.261 [2024-07-22 18:29:45.039848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.261 [2024-07-22 18:29:45.039890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:33.261 [2024-07-22 18:29:45.039908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.720 ms 00:22:33.261 [2024-07-22 18:29:45.039920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.261 [2024-07-22 18:29:45.040409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.261 [2024-07-22 18:29:45.040433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:33.261 [2024-07-22 18:29:45.040456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:22:33.261 [2024-07-22 18:29:45.040468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.261 [2024-07-22 18:29:45.081280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.261 [2024-07-22 18:29:45.081343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:33.261 [2024-07-22 18:29:45.081362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.261 [2024-07-22 18:29:45.081375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.261 [2024-07-22 18:29:45.081519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.261 [2024-07-22 18:29:45.081542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:33.261 [2024-07-22 18:29:45.081564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.261 [2024-07-22 18:29:45.081575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.261 [2024-07-22 18:29:45.081643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.261 [2024-07-22 18:29:45.081662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:22:33.261 [2024-07-22 18:29:45.081675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.261 [2024-07-22 18:29:45.081709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.261 [2024-07-22 18:29:45.081736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.261 [2024-07-22 18:29:45.081751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:33.261 [2024-07-22 18:29:45.081763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.261 [2024-07-22 18:29:45.081782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.261 [2024-07-22 18:29:45.186731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.261 [2024-07-22 18:29:45.186804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:33.261 [2024-07-22 18:29:45.186824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.261 [2024-07-22 18:29:45.186837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.261 [2024-07-22 18:29:45.272169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.261 [2024-07-22 18:29:45.272231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:33.261 [2024-07-22 18:29:45.272259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.261 [2024-07-22 18:29:45.272272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.261 [2024-07-22 18:29:45.272365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.261 [2024-07-22 18:29:45.272383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:33.261 [2024-07-22 18:29:45.272396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.261 [2024-07-22 18:29:45.272409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.261 [2024-07-22 18:29:45.272447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.261 [2024-07-22 18:29:45.272461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:33.261 [2024-07-22 18:29:45.272473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.261 [2024-07-22 18:29:45.272485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.261 [2024-07-22 18:29:45.272614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.261 [2024-07-22 18:29:45.272634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:33.261 [2024-07-22 18:29:45.272647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.261 [2024-07-22 18:29:45.272659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.261 [2024-07-22 18:29:45.272746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.261 [2024-07-22 18:29:45.272773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:33.261 [2024-07-22 18:29:45.272788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.261 [2024-07-22 18:29:45.272799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.261 [2024-07-22 18:29:45.272857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.261 [2024-07-22 18:29:45.272875] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:33.261 [2024-07-22 18:29:45.272887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.261 [2024-07-22 18:29:45.272899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.530 [2024-07-22 18:29:45.272955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.530 [2024-07-22 18:29:45.272972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:33.530 [2024-07-22 18:29:45.272985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.530 [2024-07-22 18:29:45.272996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.530 [2024-07-22 18:29:45.273174] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.401 ms, result 0 00:22:34.463 00:22:34.463 00:22:34.463 18:29:46 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:35.029 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:35.029 18:29:47 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:35.029 18:29:47 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:35.029 18:29:47 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:35.029 18:29:47 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:35.029 18:29:47 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:35.288 18:29:47 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:35.288 18:29:47 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 81242 00:22:35.288 18:29:47 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81242 ']' 00:22:35.288 18:29:47 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81242 00:22:35.288 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (81242) - No such process 00:22:35.288 Process with pid 81242 is not found 00:22:35.288 18:29:47 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 81242 is not found' 00:22:35.288 ************************************ 00:22:35.288 END TEST ftl_trim 00:22:35.288 ************************************ 00:22:35.288 00:22:35.288 real 1m11.315s 00:22:35.288 user 1m36.559s 00:22:35.288 sys 0m7.642s 00:22:35.288 18:29:47 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:35.288 18:29:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:35.288 18:29:47 ftl -- common/autotest_common.sh@1142 -- # return 0 00:22:35.288 18:29:47 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:35.288 18:29:47 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:35.288 18:29:47 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:35.288 18:29:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:35.288 ************************************ 00:22:35.288 START TEST ftl_restore 00:22:35.288 ************************************ 00:22:35.288 18:29:47 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:35.288 * Looking for test storage... 
00:22:35.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:35.288 18:29:47 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.62n2MGesEx 00:22:35.289 18:29:47 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=81523 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:35.289 18:29:47 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 81523 00:22:35.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.289 18:29:47 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 81523 ']' 00:22:35.289 18:29:47 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.289 18:29:47 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.289 18:29:47 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.289 18:29:47 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.289 18:29:47 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:35.547 [2024-07-22 18:29:47.371718] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:35.547 [2024-07-22 18:29:47.372536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81523 ] 00:22:35.547 [2024-07-22 18:29:47.551022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.114 [2024-07-22 18:29:47.833803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.682 18:29:48 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.682 18:29:48 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0 00:22:36.682 18:29:48 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:36.682 18:29:48 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:36.682 18:29:48 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:36.682 18:29:48 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:36.682 18:29:48 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:36.682 18:29:48 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:37.249 18:29:49 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:37.249 18:29:49 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:37.249 18:29:49 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:37.249 18:29:49 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:37.249 18:29:49 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:37.249 18:29:49 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:22:37.249 18:29:49 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:37.249 18:29:49 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:37.508 18:29:49 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:37.508 { 00:22:37.508 "name": "nvme0n1", 00:22:37.508 "aliases": [ 00:22:37.508 "13a3746e-f502-40ca-9414-44ab360c9a65" 00:22:37.508 ], 00:22:37.508 "product_name": "NVMe disk", 00:22:37.508 "block_size": 4096, 00:22:37.508 "num_blocks": 1310720, 00:22:37.508 "uuid": "13a3746e-f502-40ca-9414-44ab360c9a65", 00:22:37.508 "assigned_rate_limits": { 00:22:37.508 "rw_ios_per_sec": 0, 00:22:37.508 "rw_mbytes_per_sec": 0, 00:22:37.508 "r_mbytes_per_sec": 0, 00:22:37.508 "w_mbytes_per_sec": 0 00:22:37.508 }, 00:22:37.508 "claimed": true, 00:22:37.508 "claim_type": "read_many_write_one", 00:22:37.508 "zoned": false, 00:22:37.508 "supported_io_types": { 00:22:37.508 "read": true, 00:22:37.508 "write": true, 00:22:37.508 "unmap": true, 00:22:37.508 "flush": true, 00:22:37.508 "reset": true, 00:22:37.508 "nvme_admin": true, 00:22:37.508 "nvme_io": true, 00:22:37.508 "nvme_io_md": false, 00:22:37.508 "write_zeroes": true, 00:22:37.508 "zcopy": false, 00:22:37.508 "get_zone_info": false, 00:22:37.508 "zone_management": false, 00:22:37.508 "zone_append": false, 00:22:37.508 "compare": true, 00:22:37.508 "compare_and_write": false, 00:22:37.508 "abort": true, 00:22:37.508 "seek_hole": false, 00:22:37.508 "seek_data": false, 00:22:37.508 "copy": true, 00:22:37.508 "nvme_iov_md": false 00:22:37.508 }, 00:22:37.508 "driver_specific": { 00:22:37.508 "nvme": [ 00:22:37.508 { 00:22:37.508 "pci_address": "0000:00:11.0", 00:22:37.508 "trid": { 00:22:37.508 "trtype": "PCIe", 00:22:37.508 "traddr": "0000:00:11.0" 00:22:37.508 }, 00:22:37.508 "ctrlr_data": { 00:22:37.508 "cntlid": 0, 00:22:37.508 "vendor_id": "0x1b36", 00:22:37.508 "model_number": "QEMU NVMe Ctrl", 00:22:37.508 "serial_number": "12341", 00:22:37.508 "firmware_revision": "8.0.0", 00:22:37.508 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:37.508 "oacs": { 00:22:37.508 "security": 0, 00:22:37.508 "format": 1, 00:22:37.508 "firmware": 0, 00:22:37.508 "ns_manage": 1 00:22:37.508 }, 00:22:37.508 "multi_ctrlr": false, 00:22:37.508 "ana_reporting": false 00:22:37.508 }, 00:22:37.508 "vs": { 00:22:37.508 "nvme_version": "1.4" 00:22:37.508 }, 00:22:37.508 "ns_data": { 00:22:37.508 "id": 1, 00:22:37.508 "can_share": false 00:22:37.508 } 00:22:37.508 } 00:22:37.508 ], 00:22:37.508 "mp_policy": "active_passive" 00:22:37.508 } 00:22:37.508 } 00:22:37.508 ]' 00:22:37.508 18:29:49 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:37.508 18:29:49 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:37.508 18:29:49 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:37.508 18:29:49 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:37.508 18:29:49 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:37.508 18:29:49 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:22:37.508 18:29:49 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:37.508 18:29:49 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:37.508 18:29:49 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:37.508 18:29:49 ftl.ftl_restore -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:37.508 18:29:49 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:37.767 18:29:49 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=4f09dec7-2ace-490a-84ad-13a2ed981edf 00:22:37.767 18:29:49 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:37.767 18:29:49 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4f09dec7-2ace-490a-84ad-13a2ed981edf 00:22:38.025 18:29:49 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:38.284 18:29:50 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=78f0f95e-37bf-4cc2-85b6-7bf5a3524ec3 00:22:38.284 18:29:50 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 78f0f95e-37bf-4cc2-85b6-7bf5a3524ec3 00:22:38.542 18:29:50 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=08bc0f67-8ec3-48f6-b685-204c4cfdada1 00:22:38.542 18:29:50 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:38.542 18:29:50 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 08bc0f67-8ec3-48f6-b685-204c4cfdada1 00:22:38.542 18:29:50 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:38.542 18:29:50 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:38.542 18:29:50 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=08bc0f67-8ec3-48f6-b685-204c4cfdada1 00:22:38.542 18:29:50 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:38.542 18:29:50 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 08bc0f67-8ec3-48f6-b685-204c4cfdada1 00:22:38.542 18:29:50 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=08bc0f67-8ec3-48f6-b685-204c4cfdada1 00:22:38.542 18:29:50 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:38.542 18:29:50 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:38.543 18:29:50 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:38.543 18:29:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08bc0f67-8ec3-48f6-b685-204c4cfdada1 00:22:38.801 18:29:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:38.801 { 00:22:38.801 "name": "08bc0f67-8ec3-48f6-b685-204c4cfdada1", 00:22:38.801 "aliases": [ 00:22:38.801 "lvs/nvme0n1p0" 00:22:38.801 ], 00:22:38.801 "product_name": "Logical Volume", 00:22:38.801 "block_size": 4096, 00:22:38.801 "num_blocks": 26476544, 00:22:38.801 "uuid": "08bc0f67-8ec3-48f6-b685-204c4cfdada1", 00:22:38.801 "assigned_rate_limits": { 00:22:38.801 "rw_ios_per_sec": 0, 00:22:38.801 "rw_mbytes_per_sec": 0, 00:22:38.801 "r_mbytes_per_sec": 0, 00:22:38.801 "w_mbytes_per_sec": 0 00:22:38.801 }, 00:22:38.801 "claimed": false, 00:22:38.801 "zoned": false, 00:22:38.801 "supported_io_types": { 00:22:38.801 "read": true, 00:22:38.801 "write": true, 00:22:38.801 "unmap": true, 00:22:38.801 "flush": false, 00:22:38.801 "reset": true, 00:22:38.801 "nvme_admin": false, 00:22:38.801 "nvme_io": false, 00:22:38.801 "nvme_io_md": false, 00:22:38.801 "write_zeroes": true, 00:22:38.801 "zcopy": false, 00:22:38.801 "get_zone_info": false, 00:22:38.801 "zone_management": false, 00:22:38.801 "zone_append": false, 00:22:38.801 "compare": false, 00:22:38.801 "compare_and_write": false, 00:22:38.801 "abort": 
false, 00:22:38.801 "seek_hole": true, 00:22:38.801 "seek_data": true, 00:22:38.801 "copy": false, 00:22:38.801 "nvme_iov_md": false 00:22:38.801 }, 00:22:38.801 "driver_specific": { 00:22:38.801 "lvol": { 00:22:38.801 "lvol_store_uuid": "78f0f95e-37bf-4cc2-85b6-7bf5a3524ec3", 00:22:38.801 "base_bdev": "nvme0n1", 00:22:38.801 "thin_provision": true, 00:22:38.801 "num_allocated_clusters": 0, 00:22:38.801 "snapshot": false, 00:22:38.801 "clone": false, 00:22:38.801 "esnap_clone": false 00:22:38.801 } 00:22:38.801 } 00:22:38.801 } 00:22:38.801 ]' 00:22:38.801 18:29:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:38.801 18:29:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:38.801 18:29:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:39.060 18:29:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:39.060 18:29:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:39.060 18:29:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:39.060 18:29:50 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:39.060 18:29:50 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:39.060 18:29:50 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:39.318 18:29:51 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:39.318 18:29:51 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:39.318 18:29:51 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 08bc0f67-8ec3-48f6-b685-204c4cfdada1 00:22:39.318 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=08bc0f67-8ec3-48f6-b685-204c4cfdada1 00:22:39.318 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:39.318 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:39.318 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:39.318 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08bc0f67-8ec3-48f6-b685-204c4cfdada1 00:22:39.577 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:39.577 { 00:22:39.577 "name": "08bc0f67-8ec3-48f6-b685-204c4cfdada1", 00:22:39.577 "aliases": [ 00:22:39.577 "lvs/nvme0n1p0" 00:22:39.577 ], 00:22:39.577 "product_name": "Logical Volume", 00:22:39.577 "block_size": 4096, 00:22:39.577 "num_blocks": 26476544, 00:22:39.577 "uuid": "08bc0f67-8ec3-48f6-b685-204c4cfdada1", 00:22:39.577 "assigned_rate_limits": { 00:22:39.577 "rw_ios_per_sec": 0, 00:22:39.577 "rw_mbytes_per_sec": 0, 00:22:39.577 "r_mbytes_per_sec": 0, 00:22:39.577 "w_mbytes_per_sec": 0 00:22:39.577 }, 00:22:39.577 "claimed": false, 00:22:39.577 "zoned": false, 00:22:39.577 "supported_io_types": { 00:22:39.577 "read": true, 00:22:39.577 "write": true, 00:22:39.577 "unmap": true, 00:22:39.577 "flush": false, 00:22:39.577 "reset": true, 00:22:39.577 "nvme_admin": false, 00:22:39.577 "nvme_io": false, 00:22:39.577 "nvme_io_md": false, 00:22:39.577 "write_zeroes": true, 00:22:39.577 "zcopy": false, 00:22:39.577 "get_zone_info": false, 00:22:39.577 "zone_management": false, 00:22:39.577 "zone_append": false, 00:22:39.577 "compare": false, 00:22:39.577 "compare_and_write": false, 00:22:39.577 "abort": false, 00:22:39.577 "seek_hole": true, 00:22:39.577 "seek_data": 
true, 00:22:39.577 "copy": false, 00:22:39.577 "nvme_iov_md": false 00:22:39.577 }, 00:22:39.577 "driver_specific": { 00:22:39.577 "lvol": { 00:22:39.577 "lvol_store_uuid": "78f0f95e-37bf-4cc2-85b6-7bf5a3524ec3", 00:22:39.577 "base_bdev": "nvme0n1", 00:22:39.577 "thin_provision": true, 00:22:39.577 "num_allocated_clusters": 0, 00:22:39.577 "snapshot": false, 00:22:39.577 "clone": false, 00:22:39.577 "esnap_clone": false 00:22:39.577 } 00:22:39.577 } 00:22:39.577 } 00:22:39.577 ]' 00:22:39.577 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:39.577 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:39.577 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:39.577 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:39.577 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:39.577 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:39.577 18:29:51 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:39.577 18:29:51 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:39.835 18:29:51 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:39.835 18:29:51 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 08bc0f67-8ec3-48f6-b685-204c4cfdada1 00:22:39.835 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=08bc0f67-8ec3-48f6-b685-204c4cfdada1 00:22:39.835 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:39.835 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:39.835 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:39.835 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08bc0f67-8ec3-48f6-b685-204c4cfdada1 00:22:40.093 18:29:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:40.093 { 00:22:40.093 "name": "08bc0f67-8ec3-48f6-b685-204c4cfdada1", 00:22:40.093 "aliases": [ 00:22:40.093 "lvs/nvme0n1p0" 00:22:40.093 ], 00:22:40.093 "product_name": "Logical Volume", 00:22:40.093 "block_size": 4096, 00:22:40.093 "num_blocks": 26476544, 00:22:40.093 "uuid": "08bc0f67-8ec3-48f6-b685-204c4cfdada1", 00:22:40.093 "assigned_rate_limits": { 00:22:40.093 "rw_ios_per_sec": 0, 00:22:40.093 "rw_mbytes_per_sec": 0, 00:22:40.093 "r_mbytes_per_sec": 0, 00:22:40.093 "w_mbytes_per_sec": 0 00:22:40.093 }, 00:22:40.093 "claimed": false, 00:22:40.093 "zoned": false, 00:22:40.093 "supported_io_types": { 00:22:40.093 "read": true, 00:22:40.093 "write": true, 00:22:40.093 "unmap": true, 00:22:40.093 "flush": false, 00:22:40.093 "reset": true, 00:22:40.093 "nvme_admin": false, 00:22:40.093 "nvme_io": false, 00:22:40.093 "nvme_io_md": false, 00:22:40.093 "write_zeroes": true, 00:22:40.093 "zcopy": false, 00:22:40.093 "get_zone_info": false, 00:22:40.093 "zone_management": false, 00:22:40.093 "zone_append": false, 00:22:40.093 "compare": false, 00:22:40.093 "compare_and_write": false, 00:22:40.093 "abort": false, 00:22:40.093 "seek_hole": true, 00:22:40.093 "seek_data": true, 00:22:40.093 "copy": false, 00:22:40.093 "nvme_iov_md": false 00:22:40.093 }, 00:22:40.093 "driver_specific": { 00:22:40.093 "lvol": { 00:22:40.093 "lvol_store_uuid": "78f0f95e-37bf-4cc2-85b6-7bf5a3524ec3", 00:22:40.093 "base_bdev": 
"nvme0n1", 00:22:40.093 "thin_provision": true, 00:22:40.093 "num_allocated_clusters": 0, 00:22:40.093 "snapshot": false, 00:22:40.093 "clone": false, 00:22:40.093 "esnap_clone": false 00:22:40.093 } 00:22:40.093 } 00:22:40.093 } 00:22:40.093 ]' 00:22:40.093 18:29:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:40.094 18:29:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:40.352 18:29:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:40.352 18:29:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:40.352 18:29:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:40.352 18:29:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:40.352 18:29:52 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:40.352 18:29:52 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 08bc0f67-8ec3-48f6-b685-204c4cfdada1 --l2p_dram_limit 10' 00:22:40.352 18:29:52 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:40.352 18:29:52 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:40.352 18:29:52 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:40.352 18:29:52 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:40.352 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:40.352 18:29:52 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 08bc0f67-8ec3-48f6-b685-204c4cfdada1 --l2p_dram_limit 10 -c nvc0n1p0 00:22:40.610 [2024-07-22 18:29:52.409199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.610 [2024-07-22 18:29:52.409271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:40.610 [2024-07-22 18:29:52.409295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:40.610 [2024-07-22 18:29:52.409311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.610 [2024-07-22 18:29:52.409394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.610 [2024-07-22 18:29:52.409417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:40.610 [2024-07-22 18:29:52.409431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:40.610 [2024-07-22 18:29:52.409446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.610 [2024-07-22 18:29:52.409478] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:40.610 [2024-07-22 18:29:52.410499] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:40.610 [2024-07-22 18:29:52.410541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.610 [2024-07-22 18:29:52.410565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:40.611 [2024-07-22 18:29:52.410579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:22:40.611 [2024-07-22 18:29:52.410598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.611 [2024-07-22 18:29:52.410740] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 84e247ec-79b0-422c-8ee5-87972b0ec164 00:22:40.611 [2024-07-22 
18:29:52.412574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.611 [2024-07-22 18:29:52.412618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:40.611 [2024-07-22 18:29:52.412639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:40.611 [2024-07-22 18:29:52.412652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.611 [2024-07-22 18:29:52.422140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.611 [2024-07-22 18:29:52.422198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:40.611 [2024-07-22 18:29:52.422221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.397 ms 00:22:40.611 [2024-07-22 18:29:52.422234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.611 [2024-07-22 18:29:52.422372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.611 [2024-07-22 18:29:52.422392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:40.611 [2024-07-22 18:29:52.422409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:22:40.611 [2024-07-22 18:29:52.422422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.611 [2024-07-22 18:29:52.422518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.611 [2024-07-22 18:29:52.422537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:40.611 [2024-07-22 18:29:52.422553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:40.611 [2024-07-22 18:29:52.422568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.611 [2024-07-22 18:29:52.422607] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:40.611 [2024-07-22 18:29:52.427993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.611 [2024-07-22 18:29:52.428060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:40.611 [2024-07-22 18:29:52.428079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.399 ms 00:22:40.611 [2024-07-22 18:29:52.428096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.611 [2024-07-22 18:29:52.428159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.611 [2024-07-22 18:29:52.428180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:40.611 [2024-07-22 18:29:52.428194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:40.611 [2024-07-22 18:29:52.428209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.611 [2024-07-22 18:29:52.428255] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:40.611 [2024-07-22 18:29:52.428423] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:40.611 [2024-07-22 18:29:52.428442] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:40.611 [2024-07-22 18:29:52.428471] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:40.611 [2024-07-22 18:29:52.428488] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:22:40.611 [2024-07-22 18:29:52.428506] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:40.611 [2024-07-22 18:29:52.428520] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:40.611 [2024-07-22 18:29:52.428534] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:40.611 [2024-07-22 18:29:52.428549] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:40.611 [2024-07-22 18:29:52.428564] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:40.611 [2024-07-22 18:29:52.428577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.611 [2024-07-22 18:29:52.428591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:40.611 [2024-07-22 18:29:52.428604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:22:40.611 [2024-07-22 18:29:52.428618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.611 [2024-07-22 18:29:52.428733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.611 [2024-07-22 18:29:52.428755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:40.611 [2024-07-22 18:29:52.428768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:22:40.611 [2024-07-22 18:29:52.428783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.611 [2024-07-22 18:29:52.428897] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:40.611 [2024-07-22 18:29:52.428920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:40.611 [2024-07-22 18:29:52.428946] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:40.611 [2024-07-22 18:29:52.428962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.611 [2024-07-22 18:29:52.428975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:40.611 [2024-07-22 18:29:52.428988] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:40.611 [2024-07-22 18:29:52.429000] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:40.611 [2024-07-22 18:29:52.429014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:40.611 [2024-07-22 18:29:52.429025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:40.611 [2024-07-22 18:29:52.429039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:40.611 [2024-07-22 18:29:52.429050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:40.611 [2024-07-22 18:29:52.429064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:40.611 [2024-07-22 18:29:52.429076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:40.611 [2024-07-22 18:29:52.429091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:40.611 [2024-07-22 18:29:52.429103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:40.611 [2024-07-22 18:29:52.429117] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.611 [2024-07-22 18:29:52.429128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:40.611 [2024-07-22 18:29:52.429145] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:22:40.611 [2024-07-22 18:29:52.429156] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.611 [2024-07-22 18:29:52.429170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:40.611 [2024-07-22 18:29:52.429181] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:40.611 [2024-07-22 18:29:52.429194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:40.611 [2024-07-22 18:29:52.429205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:40.611 [2024-07-22 18:29:52.429219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:40.611 [2024-07-22 18:29:52.429230] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:40.611 [2024-07-22 18:29:52.429243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:40.611 [2024-07-22 18:29:52.429254] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:40.611 [2024-07-22 18:29:52.429268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:40.611 [2024-07-22 18:29:52.429279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:40.611 [2024-07-22 18:29:52.429292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:40.611 [2024-07-22 18:29:52.429303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:40.611 [2024-07-22 18:29:52.429325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:40.611 [2024-07-22 18:29:52.429337] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:40.611 [2024-07-22 18:29:52.429353] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:40.611 [2024-07-22 18:29:52.429365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:40.611 [2024-07-22 18:29:52.429379] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:40.611 [2024-07-22 18:29:52.429390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:40.611 [2024-07-22 18:29:52.429404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:40.611 [2024-07-22 18:29:52.429415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:40.611 [2024-07-22 18:29:52.429430] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.611 [2024-07-22 18:29:52.429441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:40.611 [2024-07-22 18:29:52.429455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:40.611 [2024-07-22 18:29:52.429466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.611 [2024-07-22 18:29:52.429479] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:40.611 [2024-07-22 18:29:52.429491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:40.611 [2024-07-22 18:29:52.429505] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:40.611 [2024-07-22 18:29:52.429517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.611 [2024-07-22 18:29:52.429532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:40.611 [2024-07-22 18:29:52.429544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:40.611 [2024-07-22 18:29:52.429560] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:40.611 [2024-07-22 18:29:52.429572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:40.611 [2024-07-22 18:29:52.429585] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:40.611 [2024-07-22 18:29:52.429597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:40.611 [2024-07-22 18:29:52.429616] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:40.611 [2024-07-22 18:29:52.429631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:40.611 [2024-07-22 18:29:52.429651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:40.611 [2024-07-22 18:29:52.429664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:40.611 [2024-07-22 18:29:52.429702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:40.612 [2024-07-22 18:29:52.429718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:40.612 [2024-07-22 18:29:52.429734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:40.612 [2024-07-22 18:29:52.429747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:40.612 [2024-07-22 18:29:52.429761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:40.612 [2024-07-22 18:29:52.429787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:40.612 [2024-07-22 18:29:52.429806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:40.612 [2024-07-22 18:29:52.429819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:40.612 [2024-07-22 18:29:52.429836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:40.612 [2024-07-22 18:29:52.429848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:40.612 [2024-07-22 18:29:52.429863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:40.612 [2024-07-22 18:29:52.429875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:40.612 [2024-07-22 18:29:52.429890] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:40.612 [2024-07-22 18:29:52.429903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:40.612 [2024-07-22 18:29:52.429919] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:40.612 [2024-07-22 18:29:52.429931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:40.612 [2024-07-22 18:29:52.429945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:40.612 [2024-07-22 18:29:52.429957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:40.612 [2024-07-22 18:29:52.429973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.612 [2024-07-22 18:29:52.429985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:40.612 [2024-07-22 18:29:52.430000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.137 ms 00:22:40.612 [2024-07-22 18:29:52.430012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.612 [2024-07-22 18:29:52.430074] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:40.612 [2024-07-22 18:29:52.430091] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:43.143 [2024-07-22 18:29:54.808021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.143 [2024-07-22 18:29:54.808101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:43.143 [2024-07-22 18:29:54.808128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2377.932 ms 00:22:43.143 [2024-07-22 18:29:54.808143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.143 [2024-07-22 18:29:54.847455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.143 [2024-07-22 18:29:54.847544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:43.143 [2024-07-22 18:29:54.847583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.024 ms 00:22:43.143 [2024-07-22 18:29:54.847601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.143 [2024-07-22 18:29:54.847829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.143 [2024-07-22 18:29:54.847852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:43.143 [2024-07-22 18:29:54.847871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:43.143 [2024-07-22 18:29:54.847888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.143 [2024-07-22 18:29:54.891121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.143 [2024-07-22 18:29:54.891183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:43.143 [2024-07-22 18:29:54.891208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.169 ms 00:22:43.143 [2024-07-22 18:29:54.891222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.143 [2024-07-22 18:29:54.891284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.143 [2024-07-22 18:29:54.891308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:43.143 [2024-07-22 18:29:54.891324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.003 ms 00:22:43.143 [2024-07-22 18:29:54.891337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.143 [2024-07-22 18:29:54.892026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.143 [2024-07-22 18:29:54.892055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:43.143 [2024-07-22 18:29:54.892074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:22:43.143 [2024-07-22 18:29:54.892086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.143 [2024-07-22 18:29:54.892254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.143 [2024-07-22 18:29:54.892279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:43.143 [2024-07-22 18:29:54.892300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:22:43.143 [2024-07-22 18:29:54.892312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.143 [2024-07-22 18:29:54.912788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.143 [2024-07-22 18:29:54.912859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:43.143 [2024-07-22 18:29:54.912884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.440 ms 00:22:43.143 [2024-07-22 18:29:54.912898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.143 [2024-07-22 18:29:54.927472] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:43.143 [2024-07-22 18:29:54.931629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.143 [2024-07-22 18:29:54.931697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:43.143 [2024-07-22 18:29:54.931720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.591 ms 00:22:43.143 [2024-07-22 18:29:54.931736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.143 [2024-07-22 18:29:55.007898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.143 [2024-07-22 18:29:55.007991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:43.143 [2024-07-22 18:29:55.008015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.100 ms 00:22:43.143 [2024-07-22 18:29:55.008031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.143 [2024-07-22 18:29:55.008322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.143 [2024-07-22 18:29:55.008350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:43.143 [2024-07-22 18:29:55.008365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:22:43.143 [2024-07-22 18:29:55.008384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.143 [2024-07-22 18:29:55.039800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.143 [2024-07-22 18:29:55.039853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:43.144 [2024-07-22 18:29:55.039874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.327 ms 00:22:43.144 [2024-07-22 18:29:55.039890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.144 [2024-07-22 18:29:55.070336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.144 [2024-07-22 
18:29:55.070407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:43.144 [2024-07-22 18:29:55.070431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.386 ms 00:22:43.144 [2024-07-22 18:29:55.070447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.144 [2024-07-22 18:29:55.071319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.144 [2024-07-22 18:29:55.071358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:43.144 [2024-07-22 18:29:55.071376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:22:43.144 [2024-07-22 18:29:55.071414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.402 [2024-07-22 18:29:55.159818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.402 [2024-07-22 18:29:55.159904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:43.402 [2024-07-22 18:29:55.159928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.313 ms 00:22:43.402 [2024-07-22 18:29:55.159950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.402 [2024-07-22 18:29:55.192490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.402 [2024-07-22 18:29:55.192554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:43.402 [2024-07-22 18:29:55.192577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.471 ms 00:22:43.402 [2024-07-22 18:29:55.192594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.402 [2024-07-22 18:29:55.223269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.402 [2024-07-22 18:29:55.223326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:43.402 [2024-07-22 18:29:55.223348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.617 ms 00:22:43.402 [2024-07-22 18:29:55.223363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.402 [2024-07-22 18:29:55.254735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.402 [2024-07-22 18:29:55.254804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:43.402 [2024-07-22 18:29:55.254826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.304 ms 00:22:43.402 [2024-07-22 18:29:55.254842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.402 [2024-07-22 18:29:55.254916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.402 [2024-07-22 18:29:55.254942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:43.402 [2024-07-22 18:29:55.254956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:43.402 [2024-07-22 18:29:55.254975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.402 [2024-07-22 18:29:55.255096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.402 [2024-07-22 18:29:55.255120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:43.403 [2024-07-22 18:29:55.255138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:43.403 [2024-07-22 18:29:55.255153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.403 [2024-07-22 18:29:55.256528] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2846.807 ms, result 0 00:22:43.403 { 00:22:43.403 "name": "ftl0", 00:22:43.403 "uuid": "84e247ec-79b0-422c-8ee5-87972b0ec164" 00:22:43.403 } 00:22:43.403 18:29:55 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:43.403 18:29:55 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:43.661 18:29:55 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:43.661 18:29:55 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:43.920 [2024-07-22 18:29:55.831926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.920 [2024-07-22 18:29:55.832220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:43.920 [2024-07-22 18:29:55.832371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:43.920 [2024-07-22 18:29:55.832427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.920 [2024-07-22 18:29:55.832567] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:43.920 [2024-07-22 18:29:55.836430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.920 [2024-07-22 18:29:55.836594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:43.920 [2024-07-22 18:29:55.836740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:22:43.920 [2024-07-22 18:29:55.836802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.920 [2024-07-22 18:29:55.837166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.920 [2024-07-22 18:29:55.837215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:43.920 [2024-07-22 18:29:55.837243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:22:43.920 [2024-07-22 18:29:55.837259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.920 [2024-07-22 18:29:55.840487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.920 [2024-07-22 18:29:55.840527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:43.920 [2024-07-22 18:29:55.840544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.201 ms 00:22:43.920 [2024-07-22 18:29:55.840559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.920 [2024-07-22 18:29:55.847032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.920 [2024-07-22 18:29:55.847071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:43.920 [2024-07-22 18:29:55.847090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.445 ms 00:22:43.920 [2024-07-22 18:29:55.847105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.920 [2024-07-22 18:29:55.878446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.920 [2024-07-22 18:29:55.878500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:43.920 [2024-07-22 18:29:55.878519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.249 ms 00:22:43.920 [2024-07-22 18:29:55.878534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.920 [2024-07-22 
18:29:55.896878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.920 [2024-07-22 18:29:55.896933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:43.920 [2024-07-22 18:29:55.896953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.291 ms 00:22:43.920 [2024-07-22 18:29:55.896969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.920 [2024-07-22 18:29:55.897166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.920 [2024-07-22 18:29:55.897197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:43.920 [2024-07-22 18:29:55.897213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:22:43.920 [2024-07-22 18:29:55.897227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.920 [2024-07-22 18:29:55.927662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.920 [2024-07-22 18:29:55.927725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:43.920 [2024-07-22 18:29:55.927745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.407 ms 00:22:43.920 [2024-07-22 18:29:55.927761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.181 [2024-07-22 18:29:55.958013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.181 [2024-07-22 18:29:55.958066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:44.181 [2024-07-22 18:29:55.958086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.199 ms 00:22:44.181 [2024-07-22 18:29:55.958101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.181 [2024-07-22 18:29:55.988098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.181 [2024-07-22 18:29:55.988153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:44.181 [2024-07-22 18:29:55.988173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.944 ms 00:22:44.181 [2024-07-22 18:29:55.988188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.181 [2024-07-22 18:29:56.018111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.181 [2024-07-22 18:29:56.018165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:44.181 [2024-07-22 18:29:56.018185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.797 ms 00:22:44.181 [2024-07-22 18:29:56.018200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.181 [2024-07-22 18:29:56.018253] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:44.181 [2024-07-22 18:29:56.018282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 
18:29:56.018356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:22:44.181 [2024-07-22 18:29:56.018750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.018994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.019009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.019022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.019037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.019050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.019065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.019078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.019094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.019108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.019126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.019139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.019155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.019167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:44.181 [2024-07-22 18:29:56.019183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:44.182 [2024-07-22 18:29:56.019857] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:44.182 [2024-07-22 18:29:56.019874] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84e247ec-79b0-422c-8ee5-87972b0ec164 00:22:44.182 [2024-07-22 18:29:56.019891] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:44.182 [2024-07-22 18:29:56.019904] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:44.182 [2024-07-22 18:29:56.019920] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:44.182 [2024-07-22 18:29:56.019933] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:44.182 [2024-07-22 18:29:56.019947] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:44.182 [2024-07-22 18:29:56.019959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:44.182 [2024-07-22 18:29:56.019974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:44.182 [2024-07-22 18:29:56.019985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:44.182 [2024-07-22 18:29:56.019998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:44.182 [2024-07-22 18:29:56.020010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.182 [2024-07-22 18:29:56.020025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:44.182 [2024-07-22 18:29:56.020038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.760 ms 00:22:44.182 [2024-07-22 18:29:56.020053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.182 [2024-07-22 18:29:56.037079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.182 [2024-07-22 18:29:56.037128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:44.182 [2024-07-22 18:29:56.037147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.953 ms 00:22:44.182 [2024-07-22 18:29:56.037163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.182 [2024-07-22 18:29:56.037634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.182 [2024-07-22 18:29:56.037671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:44.182 [2024-07-22 18:29:56.037706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:22:44.182 [2024-07-22 18:29:56.037730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.182 [2024-07-22 18:29:56.091264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.182 [2024-07-22 18:29:56.091347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:44.182 [2024-07-22 18:29:56.091369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.182 [2024-07-22 18:29:56.091399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.182 [2024-07-22 18:29:56.091527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.182 [2024-07-22 18:29:56.091549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:44.182 [2024-07-22 18:29:56.091564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.182 [2024-07-22 18:29:56.091583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.182 [2024-07-22 18:29:56.091734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.182 [2024-07-22 18:29:56.091761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:44.182 [2024-07-22 18:29:56.091776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.182 [2024-07-22 18:29:56.091792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.182 [2024-07-22 18:29:56.091821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.182 [2024-07-22 18:29:56.091841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:22:44.182 [2024-07-22 18:29:56.091854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.182 [2024-07-22 18:29:56.091868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.441 [2024-07-22 18:29:56.196018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.441 [2024-07-22 18:29:56.196098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:44.441 [2024-07-22 18:29:56.196118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.441 [2024-07-22 18:29:56.196134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.441 [2024-07-22 18:29:56.282209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.441 [2024-07-22 18:29:56.282299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:44.441 [2024-07-22 18:29:56.282321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.441 [2024-07-22 18:29:56.282341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.441 [2024-07-22 18:29:56.282462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.441 [2024-07-22 18:29:56.282486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:44.441 [2024-07-22 18:29:56.282500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.442 [2024-07-22 18:29:56.282515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.442 [2024-07-22 18:29:56.282582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.442 [2024-07-22 18:29:56.282609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:44.442 [2024-07-22 18:29:56.282623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.442 [2024-07-22 18:29:56.282637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.442 [2024-07-22 18:29:56.282795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.442 [2024-07-22 18:29:56.282820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:44.442 [2024-07-22 18:29:56.282834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.442 [2024-07-22 18:29:56.282849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.442 [2024-07-22 18:29:56.282904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.442 [2024-07-22 18:29:56.282927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:44.442 [2024-07-22 18:29:56.282941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.442 [2024-07-22 18:29:56.282956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.442 [2024-07-22 18:29:56.283011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.442 [2024-07-22 18:29:56.283030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:44.442 [2024-07-22 18:29:56.283043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.442 [2024-07-22 18:29:56.283057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.442 [2024-07-22 18:29:56.283115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.442 [2024-07-22 18:29:56.283140] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:44.442 [2024-07-22 18:29:56.283154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.442 [2024-07-22 18:29:56.283169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.442 [2024-07-22 18:29:56.283339] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 451.374 ms, result 0 00:22:44.442 true 00:22:44.442 18:29:56 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 81523 00:22:44.442 18:29:56 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 81523 ']' 00:22:44.442 18:29:56 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 81523 00:22:44.442 18:29:56 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:22:44.442 18:29:56 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:44.442 18:29:56 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81523 00:22:44.442 killing process with pid 81523 00:22:44.442 18:29:56 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:44.442 18:29:56 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:44.442 18:29:56 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81523' 00:22:44.442 18:29:56 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 81523 00:22:44.442 18:29:56 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 81523 00:22:49.708 18:30:01 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:54.974 262144+0 records in 00:22:54.974 262144+0 records out 00:22:54.974 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.83652 s, 222 MB/s 00:22:54.974 18:30:06 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:56.348 18:30:08 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:56.348 [2024-07-22 18:30:08.298606] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:22:56.348 [2024-07-22 18:30:08.299048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81771 ] 00:22:56.606 [2024-07-22 18:30:08.475950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.864 [2024-07-22 18:30:08.742615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.123 [2024-07-22 18:30:09.093338] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:57.123 [2024-07-22 18:30:09.093423] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:57.383 [2024-07-22 18:30:09.256654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.383 [2024-07-22 18:30:09.256742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:57.383 [2024-07-22 18:30:09.256764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:57.383 [2024-07-22 18:30:09.256776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.383 [2024-07-22 18:30:09.256861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.383 [2024-07-22 18:30:09.256884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:57.383 [2024-07-22 18:30:09.256898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:57.383 [2024-07-22 18:30:09.256915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.383 [2024-07-22 18:30:09.256948] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:57.383 [2024-07-22 18:30:09.257964] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:57.383 [2024-07-22 18:30:09.258001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.383 [2024-07-22 18:30:09.258021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:57.383 [2024-07-22 18:30:09.258034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:22:57.383 [2024-07-22 18:30:09.258046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.383 [2024-07-22 18:30:09.260024] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:57.383 [2024-07-22 18:30:09.277038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.383 [2024-07-22 18:30:09.277110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:57.383 [2024-07-22 18:30:09.277132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.035 ms 00:22:57.383 [2024-07-22 18:30:09.277144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.383 [2024-07-22 18:30:09.277259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.383 [2024-07-22 18:30:09.277282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:57.383 [2024-07-22 18:30:09.277301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:57.383 [2024-07-22 18:30:09.277313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.383 [2024-07-22 18:30:09.286565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:57.383 [2024-07-22 18:30:09.286630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:57.383 [2024-07-22 18:30:09.286649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.125 ms 00:22:57.383 [2024-07-22 18:30:09.286661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.383 [2024-07-22 18:30:09.286813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.383 [2024-07-22 18:30:09.286843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:57.383 [2024-07-22 18:30:09.286858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:22:57.383 [2024-07-22 18:30:09.286870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.383 [2024-07-22 18:30:09.286952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.383 [2024-07-22 18:30:09.286972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:57.383 [2024-07-22 18:30:09.286985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:57.383 [2024-07-22 18:30:09.287008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.383 [2024-07-22 18:30:09.287048] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:57.383 [2024-07-22 18:30:09.292144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.383 [2024-07-22 18:30:09.292205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:57.383 [2024-07-22 18:30:09.292224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.107 ms 00:22:57.383 [2024-07-22 18:30:09.292236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.383 [2024-07-22 18:30:09.292298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.383 [2024-07-22 18:30:09.292318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:57.383 [2024-07-22 18:30:09.292332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:57.383 [2024-07-22 18:30:09.292343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.383 [2024-07-22 18:30:09.292425] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:57.383 [2024-07-22 18:30:09.292461] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:57.383 [2024-07-22 18:30:09.292505] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:57.383 [2024-07-22 18:30:09.292530] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:57.383 [2024-07-22 18:30:09.292636] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:57.383 [2024-07-22 18:30:09.292652] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:57.383 [2024-07-22 18:30:09.292667] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:57.383 [2024-07-22 18:30:09.292711] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:57.383 [2024-07-22 18:30:09.292729] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:57.383 [2024-07-22 18:30:09.292742] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:57.383 [2024-07-22 18:30:09.292753] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:57.383 [2024-07-22 18:30:09.292764] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:57.383 [2024-07-22 18:30:09.292776] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:57.383 [2024-07-22 18:30:09.292788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.383 [2024-07-22 18:30:09.292806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:57.383 [2024-07-22 18:30:09.292819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:22:57.383 [2024-07-22 18:30:09.292830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.383 [2024-07-22 18:30:09.292926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.383 [2024-07-22 18:30:09.292944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:57.383 [2024-07-22 18:30:09.292957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:57.383 [2024-07-22 18:30:09.292967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.383 [2024-07-22 18:30:09.293074] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:57.383 [2024-07-22 18:30:09.293093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:57.383 [2024-07-22 18:30:09.293112] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:57.383 [2024-07-22 18:30:09.293124] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.383 [2024-07-22 18:30:09.293135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:57.383 [2024-07-22 18:30:09.293146] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:57.383 [2024-07-22 18:30:09.293156] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:57.383 [2024-07-22 18:30:09.293167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:57.383 [2024-07-22 18:30:09.293177] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:57.383 [2024-07-22 18:30:09.293187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:57.383 [2024-07-22 18:30:09.293198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:57.383 [2024-07-22 18:30:09.293209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:57.383 [2024-07-22 18:30:09.293228] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:57.383 [2024-07-22 18:30:09.293238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:57.383 [2024-07-22 18:30:09.293248] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:57.383 [2024-07-22 18:30:09.293258] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.383 [2024-07-22 18:30:09.293269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:57.383 [2024-07-22 18:30:09.293281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:57.384 [2024-07-22 18:30:09.293291] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.384 [2024-07-22 18:30:09.293302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:57.384 [2024-07-22 18:30:09.293326] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:57.384 [2024-07-22 18:30:09.293337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:57.384 [2024-07-22 18:30:09.293348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:57.384 [2024-07-22 18:30:09.293358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:57.384 [2024-07-22 18:30:09.293368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:57.384 [2024-07-22 18:30:09.293378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:57.384 [2024-07-22 18:30:09.293388] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:57.384 [2024-07-22 18:30:09.293398] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:57.384 [2024-07-22 18:30:09.293409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:57.384 [2024-07-22 18:30:09.293419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:57.384 [2024-07-22 18:30:09.293430] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:57.384 [2024-07-22 18:30:09.293440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:57.384 [2024-07-22 18:30:09.293451] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:57.384 [2024-07-22 18:30:09.293461] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:57.384 [2024-07-22 18:30:09.293472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:57.384 [2024-07-22 18:30:09.293482] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:57.384 [2024-07-22 18:30:09.293492] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:57.384 [2024-07-22 18:30:09.293503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:57.384 [2024-07-22 18:30:09.293513] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:57.384 [2024-07-22 18:30:09.293523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.384 [2024-07-22 18:30:09.293533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:57.384 [2024-07-22 18:30:09.293543] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:57.384 [2024-07-22 18:30:09.293553] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.384 [2024-07-22 18:30:09.293563] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:57.384 [2024-07-22 18:30:09.293574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:57.384 [2024-07-22 18:30:09.293584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:57.384 [2024-07-22 18:30:09.293595] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.384 [2024-07-22 18:30:09.293606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:57.384 [2024-07-22 18:30:09.293618] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:57.384 [2024-07-22 18:30:09.293629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:57.384 
[2024-07-22 18:30:09.293640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:57.384 [2024-07-22 18:30:09.293650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:57.384 [2024-07-22 18:30:09.293661] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:57.384 [2024-07-22 18:30:09.293673] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:57.384 [2024-07-22 18:30:09.293705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:57.384 [2024-07-22 18:30:09.293720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:57.384 [2024-07-22 18:30:09.293731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:57.384 [2024-07-22 18:30:09.293748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:57.384 [2024-07-22 18:30:09.293760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:57.384 [2024-07-22 18:30:09.293771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:57.384 [2024-07-22 18:30:09.293782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:57.384 [2024-07-22 18:30:09.293793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:57.384 [2024-07-22 18:30:09.293805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:57.384 [2024-07-22 18:30:09.293817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:57.384 [2024-07-22 18:30:09.293829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:57.384 [2024-07-22 18:30:09.293840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:57.384 [2024-07-22 18:30:09.293852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:57.384 [2024-07-22 18:30:09.293863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:57.384 [2024-07-22 18:30:09.293875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:57.384 [2024-07-22 18:30:09.293887] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:57.384 [2024-07-22 18:30:09.293899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:57.384 [2024-07-22 18:30:09.293911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:57.384 [2024-07-22 18:30:09.293923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:57.384 [2024-07-22 18:30:09.293935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:57.384 [2024-07-22 18:30:09.293947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:57.384 [2024-07-22 18:30:09.293959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.384 [2024-07-22 18:30:09.293978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:57.384 [2024-07-22 18:30:09.293991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:22:57.384 [2024-07-22 18:30:09.294002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.384 [2024-07-22 18:30:09.349763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.384 [2024-07-22 18:30:09.349839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:57.384 [2024-07-22 18:30:09.349880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.691 ms 00:22:57.384 [2024-07-22 18:30:09.349891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.384 [2024-07-22 18:30:09.350021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.384 [2024-07-22 18:30:09.350040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:57.384 [2024-07-22 18:30:09.350054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:57.384 [2024-07-22 18:30:09.350065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.384 [2024-07-22 18:30:09.392753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.384 [2024-07-22 18:30:09.392820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:57.384 [2024-07-22 18:30:09.392841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.588 ms 00:22:57.384 [2024-07-22 18:30:09.392853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.384 [2024-07-22 18:30:09.392930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.384 [2024-07-22 18:30:09.392949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:57.384 [2024-07-22 18:30:09.392963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:57.384 [2024-07-22 18:30:09.392975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.384 [2024-07-22 18:30:09.393602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.384 [2024-07-22 18:30:09.393629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:57.384 [2024-07-22 18:30:09.393643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:22:57.384 [2024-07-22 18:30:09.393654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.384 [2024-07-22 18:30:09.393847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.384 [2024-07-22 18:30:09.393869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:57.384 [2024-07-22 18:30:09.393882] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:22:57.384 [2024-07-22 18:30:09.393893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.412055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.412117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:57.643 [2024-07-22 18:30:09.412137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.130 ms 00:22:57.643 [2024-07-22 18:30:09.412149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.428928] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:57.643 [2024-07-22 18:30:09.428988] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:57.643 [2024-07-22 18:30:09.429014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.429028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:57.643 [2024-07-22 18:30:09.429043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.684 ms 00:22:57.643 [2024-07-22 18:30:09.429054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.458692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.458776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:57.643 [2024-07-22 18:30:09.458798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.564 ms 00:22:57.643 [2024-07-22 18:30:09.458824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.475538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.475599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:57.643 [2024-07-22 18:30:09.475619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.630 ms 00:22:57.643 [2024-07-22 18:30:09.475630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.490920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.490983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:57.643 [2024-07-22 18:30:09.491004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.209 ms 00:22:57.643 [2024-07-22 18:30:09.491015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.492010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.492055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:57.643 [2024-07-22 18:30:09.492073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:22:57.643 [2024-07-22 18:30:09.492085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.569167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.569251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:57.643 [2024-07-22 18:30:09.569273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.053 ms 00:22:57.643 [2024-07-22 18:30:09.569285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.585098] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:57.643 [2024-07-22 18:30:09.589357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.589410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:57.643 [2024-07-22 18:30:09.589430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.990 ms 00:22:57.643 [2024-07-22 18:30:09.589441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.589592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.589614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:57.643 [2024-07-22 18:30:09.589628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:57.643 [2024-07-22 18:30:09.589640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.589760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.589783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:57.643 [2024-07-22 18:30:09.589803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:57.643 [2024-07-22 18:30:09.589815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.589852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.589870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:57.643 [2024-07-22 18:30:09.589883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:57.643 [2024-07-22 18:30:09.589894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.589938] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:57.643 [2024-07-22 18:30:09.589957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.589970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:57.643 [2024-07-22 18:30:09.589982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:57.643 [2024-07-22 18:30:09.589998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.622065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.622141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:57.643 [2024-07-22 18:30:09.622163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.039 ms 00:22:57.643 [2024-07-22 18:30:09.622176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.643 [2024-07-22 18:30:09.622295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.643 [2024-07-22 18:30:09.622317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:57.643 [2024-07-22 18:30:09.622344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:57.643 [2024-07-22 18:30:09.622356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
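The trace_step records in this output follow a fixed four-record pattern per management step (Action, name, duration, status), so per-step timings can be tabulated directly from the console log. The script below is a minimal sketch added for illustration, not an SPDK tool: it assumes one record per line, as in the raw console output, keys only on the 428:/430: trace_step lines visible here, and the file name ftl_step_times.py is hypothetical.

#!/usr/bin/env python3
# ftl_step_times.py (hypothetical helper, not part of the SPDK tree):
# tabulate FTL management step durations from a console log like this one.
import re
import sys

# Match the step-name and duration records emitted by mngt/ftl_mngt.c;
# the patterns are copied from the 428:/430: trace_step lines above.
NAME_RE = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR_RE = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def main() -> None:
    step = None  # most recently seen step name
    total = 0.0
    for line in sys.stdin:
        if m := NAME_RE.search(line):
            step = m.group(1).strip()
        elif (m := DUR_RE.search(line)) and step is not None:
            dur = float(m.group(1))
            total += dur
            print(f"{dur:10.3f} ms  {step}")
    print(f"{total:10.3f} ms  total")

if __name__ == "__main__":
    main()

Run as, e.g., python3 ftl_step_times.py < console.log against a saved copy of this output; the per-step durations it prints should roughly add up to the overall figure quoted by finish_msg for each management process (such as 'FTL startup' below).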
00:22:57.643 [2024-07-22 18:30:09.623818] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 366.578 ms, result 0 00:23:36.003  Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-22 18:30:47.896937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.003 [2024-07-22 18:30:47.897015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:36.003 [2024-07-22 18:30:47.897036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:36.003 [2024-07-22 18:30:47.897048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.003 [2024-07-22 18:30:47.897080] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:36.003 [2024-07-22 18:30:47.900833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.003 [2024-07-22 18:30:47.900872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:36.003 [2024-07-22 18:30:47.900889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.729 ms 00:23:36.003 [2024-07-22 18:30:47.900901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.003 [2024-07-22 18:30:47.902666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.003 [2024-07-22 18:30:47.902753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:36.003 [2024-07-22 18:30:47.902770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms 00:23:36.003 [2024-07-22 18:30:47.902781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.003 [2024-07-22 18:30:47.919163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.003 [2024-07-22 18:30:47.919278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:36.003 [2024-07-22 18:30:47.919314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.352 ms 00:23:36.003 [2024-07-22 18:30:47.919325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.003 [2024-07-22 18:30:47.926271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.003 [2024-07-22
18:30:47.926325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:36.003 [2024-07-22 18:30:47.926387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.899 ms 00:23:36.003 [2024-07-22 18:30:47.926413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.003 [2024-07-22 18:30:47.960521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.003 [2024-07-22 18:30:47.960595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:36.003 [2024-07-22 18:30:47.960632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.980 ms 00:23:36.003 [2024-07-22 18:30:47.960644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.003 [2024-07-22 18:30:47.979622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.003 [2024-07-22 18:30:47.979729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:36.003 [2024-07-22 18:30:47.979751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.885 ms 00:23:36.003 [2024-07-22 18:30:47.979765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.003 [2024-07-22 18:30:47.979993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.003 [2024-07-22 18:30:47.980016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:36.003 [2024-07-22 18:30:47.980030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:23:36.003 [2024-07-22 18:30:47.980042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.268 [2024-07-22 18:30:48.013423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.268 [2024-07-22 18:30:48.013497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:36.268 [2024-07-22 18:30:48.013517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.349 ms 00:23:36.268 [2024-07-22 18:30:48.013528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.268 [2024-07-22 18:30:48.045144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.268 [2024-07-22 18:30:48.045209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:36.268 [2024-07-22 18:30:48.045230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.546 ms 00:23:36.268 [2024-07-22 18:30:48.045241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.268 [2024-07-22 18:30:48.078075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.268 [2024-07-22 18:30:48.078155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:36.268 [2024-07-22 18:30:48.078192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.750 ms 00:23:36.268 [2024-07-22 18:30:48.078222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.268 [2024-07-22 18:30:48.110952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.268 [2024-07-22 18:30:48.111049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:36.268 [2024-07-22 18:30:48.111069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.594 ms 00:23:36.268 [2024-07-22 18:30:48.111081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.268 [2024-07-22 18:30:48.111150] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:36.268 [2024-07-22 18:30:48.111175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111487] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:36.268 [2024-07-22 18:30:48.111707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 
18:30:48.111833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.111988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:23:36.269 [2024-07-22 18:30:48.112137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:36.269 [2024-07-22 18:30:48.112464] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:36.269 [2024-07-22 18:30:48.112475] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84e247ec-79b0-422c-8ee5-87972b0ec164 00:23:36.269 [2024-07-22 18:30:48.112488] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:36.269 [2024-07-22 18:30:48.112499] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:36.269 [2024-07-22 18:30:48.112510] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:36.269 [2024-07-22 18:30:48.112531] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:36.269 [2024-07-22 18:30:48.112541] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:36.269 [2024-07-22 18:30:48.112552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:36.269 [2024-07-22 18:30:48.112563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:36.269 [2024-07-22 18:30:48.112573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:36.269 [2024-07-22 18:30:48.112584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:36.269 [2024-07-22 18:30:48.112595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.269 [2024-07-22 18:30:48.112607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:36.269 [2024-07-22 18:30:48.112619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.447 ms 00:23:36.269 [2024-07-22 18:30:48.112630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.269 [2024-07-22 18:30:48.130630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.269 [2024-07-22 18:30:48.130739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:36.269 [2024-07-22 18:30:48.130760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.924 ms 00:23:36.269 [2024-07-22 18:30:48.130785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.269 [2024-07-22 18:30:48.131272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.269 [2024-07-22 18:30:48.131303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:36.269 [2024-07-22 18:30:48.131319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:23:36.269 [2024-07-22 18:30:48.131331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.269 [2024-07-22 18:30:48.171812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.269 [2024-07-22 18:30:48.171883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.269 [2024-07-22 18:30:48.171902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.269 [2024-07-22 18:30:48.171914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.269 [2024-07-22 18:30:48.172009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.269 [2024-07-22 18:30:48.172025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.269 [2024-07-22 18:30:48.172037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:36.269 [2024-07-22 18:30:48.172049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.269 [2024-07-22 18:30:48.172152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.269 [2024-07-22 18:30:48.172179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.270 [2024-07-22 18:30:48.172192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.270 [2024-07-22 18:30:48.172203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.270 [2024-07-22 18:30:48.172227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.270 [2024-07-22 18:30:48.172241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.270 [2024-07-22 18:30:48.172253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.270 [2024-07-22 18:30:48.172264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.529 [2024-07-22 18:30:48.285439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.529 [2024-07-22 18:30:48.285525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.529 [2024-07-22 18:30:48.285574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.529 [2024-07-22 18:30:48.285586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.529 [2024-07-22 18:30:48.382445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.529 [2024-07-22 18:30:48.382535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.529 [2024-07-22 18:30:48.382571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.529 [2024-07-22 18:30:48.382584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.529 [2024-07-22 18:30:48.382684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.529 [2024-07-22 18:30:48.382703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.529 [2024-07-22 18:30:48.382764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.529 [2024-07-22 18:30:48.382791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.529 [2024-07-22 18:30:48.382872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.529 [2024-07-22 18:30:48.382889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.529 [2024-07-22 18:30:48.382900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.529 [2024-07-22 18:30:48.382911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.529 [2024-07-22 18:30:48.383048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.529 [2024-07-22 18:30:48.383067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.529 [2024-07-22 18:30:48.383080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.529 [2024-07-22 18:30:48.383097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.529 [2024-07-22 18:30:48.383150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.529 [2024-07-22 18:30:48.383168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:36.529 
[2024-07-22 18:30:48.383180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.529 [2024-07-22 18:30:48.383192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.529 [2024-07-22 18:30:48.383236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.529 [2024-07-22 18:30:48.383257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.529 [2024-07-22 18:30:48.383270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.529 [2024-07-22 18:30:48.383281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.529 [2024-07-22 18:30:48.383339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.529 [2024-07-22 18:30:48.383356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.529 [2024-07-22 18:30:48.383368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.529 [2024-07-22 18:30:48.383379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.529 [2024-07-22 18:30:48.383539] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 486.572 ms, result 0 00:23:38.434 00:23:38.434 00:23:38.434 18:30:50 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:38.434 [2024-07-22 18:30:50.156215] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:23:38.434 [2024-07-22 18:30:50.156801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82185 ] 00:23:38.434 [2024-07-22 18:30:50.338067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.693 [2024-07-22 18:30:50.594395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.951 [2024-07-22 18:30:50.960327] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:38.951 [2024-07-22 18:30:50.960409] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:39.211 [2024-07-22 18:30:51.126968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.211 [2024-07-22 18:30:51.127043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:39.211 [2024-07-22 18:30:51.127065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:39.211 [2024-07-22 18:30:51.127077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.211 [2024-07-22 18:30:51.127158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.211 [2024-07-22 18:30:51.127180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:39.211 [2024-07-22 18:30:51.127192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:39.211 [2024-07-22 18:30:51.127208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.211 [2024-07-22 18:30:51.127239] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:39.211 [2024-07-22 18:30:51.128272] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:39.211 [2024-07-22 18:30:51.128315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.211 [2024-07-22 18:30:51.128334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:39.211 [2024-07-22 18:30:51.128347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:23:39.211 [2024-07-22 18:30:51.128359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.211 [2024-07-22 18:30:51.130329] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:39.211 [2024-07-22 18:30:51.148911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.211 [2024-07-22 18:30:51.148991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:39.211 [2024-07-22 18:30:51.149019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.580 ms 00:23:39.211 [2024-07-22 18:30:51.149032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.211 [2024-07-22 18:30:51.149155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.211 [2024-07-22 18:30:51.149176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:39.211 [2024-07-22 18:30:51.149194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:39.211 [2024-07-22 18:30:51.149205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.211 [2024-07-22 18:30:51.159261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.211 [2024-07-22 18:30:51.159330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:39.211 [2024-07-22 18:30:51.159351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.925 ms 00:23:39.211 [2024-07-22 18:30:51.159363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.211 [2024-07-22 18:30:51.159503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.211 [2024-07-22 18:30:51.159523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:39.211 [2024-07-22 18:30:51.159537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:23:39.211 [2024-07-22 18:30:51.159554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.211 [2024-07-22 18:30:51.159661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.211 [2024-07-22 18:30:51.159707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:39.211 [2024-07-22 18:30:51.159724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:39.211 [2024-07-22 18:30:51.159736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.211 [2024-07-22 18:30:51.159774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:39.212 [2024-07-22 18:30:51.165351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.212 [2024-07-22 18:30:51.165393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:39.212 [2024-07-22 18:30:51.165409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.587 ms 00:23:39.212 [2024-07-22 18:30:51.165420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.212 
[2024-07-22 18:30:51.165497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.212 [2024-07-22 18:30:51.165542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:39.212 [2024-07-22 18:30:51.165568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:39.212 [2024-07-22 18:30:51.165579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.212 [2024-07-22 18:30:51.165627] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:39.212 [2024-07-22 18:30:51.165659] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:39.212 [2024-07-22 18:30:51.165700] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:39.212 [2024-07-22 18:30:51.165722] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:39.212 [2024-07-22 18:30:51.165862] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:39.212 [2024-07-22 18:30:51.165880] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:39.212 [2024-07-22 18:30:51.165894] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:39.212 [2024-07-22 18:30:51.165909] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:39.212 [2024-07-22 18:30:51.165922] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:39.212 [2024-07-22 18:30:51.165934] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:39.212 [2024-07-22 18:30:51.165960] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:39.212 [2024-07-22 18:30:51.165971] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:39.212 [2024-07-22 18:30:51.165982] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:39.212 [2024-07-22 18:30:51.165999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.212 [2024-07-22 18:30:51.166010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:39.212 [2024-07-22 18:30:51.166022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:23:39.212 [2024-07-22 18:30:51.166032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.212 [2024-07-22 18:30:51.166140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.212 [2024-07-22 18:30:51.166156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:39.212 [2024-07-22 18:30:51.166168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:23:39.212 [2024-07-22 18:30:51.166185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.212 [2024-07-22 18:30:51.166294] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:39.212 [2024-07-22 18:30:51.166316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:39.212 [2024-07-22 18:30:51.166329] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:39.212 [2024-07-22 18:30:51.166341] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.212 [2024-07-22 18:30:51.166352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:39.212 [2024-07-22 18:30:51.166363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:39.212 [2024-07-22 18:30:51.166373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:39.212 [2024-07-22 18:30:51.166386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:39.212 [2024-07-22 18:30:51.166397] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:39.212 [2024-07-22 18:30:51.166407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:39.212 [2024-07-22 18:30:51.166432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:39.212 [2024-07-22 18:30:51.166443] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:39.212 [2024-07-22 18:30:51.166453] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:39.212 [2024-07-22 18:30:51.166463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:39.212 [2024-07-22 18:30:51.166503] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:39.212 [2024-07-22 18:30:51.166513] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.212 [2024-07-22 18:30:51.166522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:39.212 [2024-07-22 18:30:51.166532] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:39.212 [2024-07-22 18:30:51.166541] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.212 [2024-07-22 18:30:51.166551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:39.212 [2024-07-22 18:30:51.166574] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:39.212 [2024-07-22 18:30:51.166584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:39.212 [2024-07-22 18:30:51.166594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:39.212 [2024-07-22 18:30:51.166604] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:39.212 [2024-07-22 18:30:51.166613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:39.212 [2024-07-22 18:30:51.166622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:39.212 [2024-07-22 18:30:51.166632] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:39.212 [2024-07-22 18:30:51.166641] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:39.212 [2024-07-22 18:30:51.166650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:39.212 [2024-07-22 18:30:51.166660] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:39.212 [2024-07-22 18:30:51.166669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:39.212 [2024-07-22 18:30:51.166678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:39.212 [2024-07-22 18:30:51.166688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:39.212 [2024-07-22 18:30:51.166697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:39.212 [2024-07-22 18:30:51.166707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:39.212 [2024-07-22 18:30:51.166717] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:39.212 [2024-07-22 18:30:51.166726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:39.212 [2024-07-22 18:30:51.166736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:39.212 [2024-07-22 18:30:51.166745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:39.212 [2024-07-22 18:30:51.166755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.212 [2024-07-22 18:30:51.166765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:39.212 [2024-07-22 18:30:51.166774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:39.212 [2024-07-22 18:30:51.166815] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.212 [2024-07-22 18:30:51.166827] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:39.212 [2024-07-22 18:30:51.166854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:39.212 [2024-07-22 18:30:51.166865] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:39.212 [2024-07-22 18:30:51.166876] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.212 [2024-07-22 18:30:51.166887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:39.212 [2024-07-22 18:30:51.166897] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:39.212 [2024-07-22 18:30:51.166909] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:39.212 [2024-07-22 18:30:51.166920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:39.212 [2024-07-22 18:30:51.166930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:39.212 [2024-07-22 18:30:51.166941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:39.212 [2024-07-22 18:30:51.166953] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:39.212 [2024-07-22 18:30:51.166968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:39.212 [2024-07-22 18:30:51.166981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:39.212 [2024-07-22 18:30:51.166992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:39.212 [2024-07-22 18:30:51.167004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:39.212 [2024-07-22 18:30:51.167015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:39.212 [2024-07-22 18:30:51.167027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:39.212 [2024-07-22 18:30:51.167038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:39.212 [2024-07-22 18:30:51.167049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:39.212 [2024-07-22 
18:30:51.167060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:39.212 [2024-07-22 18:30:51.167071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:39.212 [2024-07-22 18:30:51.167083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:39.212 [2024-07-22 18:30:51.167094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:39.212 [2024-07-22 18:30:51.167121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:39.212 [2024-07-22 18:30:51.167133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:39.212 [2024-07-22 18:30:51.167145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:39.213 [2024-07-22 18:30:51.167156] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:39.213 [2024-07-22 18:30:51.167169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:39.213 [2024-07-22 18:30:51.167197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:39.213 [2024-07-22 18:30:51.167210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:39.213 [2024-07-22 18:30:51.167222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:39.213 [2024-07-22 18:30:51.167234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:39.213 [2024-07-22 18:30:51.167247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.213 [2024-07-22 18:30:51.167259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:39.213 [2024-07-22 18:30:51.167271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:23:39.213 [2024-07-22 18:30:51.167282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.213 [2024-07-22 18:30:51.222630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.213 [2024-07-22 18:30:51.222719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:39.213 [2024-07-22 18:30:51.222743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.262 ms 00:23:39.213 [2024-07-22 18:30:51.222756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.213 [2024-07-22 18:30:51.222884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.213 [2024-07-22 18:30:51.222903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:39.213 [2024-07-22 18:30:51.222917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:39.213 [2024-07-22 18:30:51.222928] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.472 [2024-07-22 18:30:51.268482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.472 [2024-07-22 18:30:51.268547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:39.472 [2024-07-22 18:30:51.268582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.456 ms 00:23:39.472 [2024-07-22 18:30:51.268593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.472 [2024-07-22 18:30:51.268664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.472 [2024-07-22 18:30:51.268680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:39.472 [2024-07-22 18:30:51.268747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:39.472 [2024-07-22 18:30:51.268768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.472 [2024-07-22 18:30:51.269433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.472 [2024-07-22 18:30:51.269475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:39.472 [2024-07-22 18:30:51.269491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:23:39.472 [2024-07-22 18:30:51.269502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.472 [2024-07-22 18:30:51.269686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.472 [2024-07-22 18:30:51.269737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:39.472 [2024-07-22 18:30:51.269749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:23:39.472 [2024-07-22 18:30:51.269761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.472 [2024-07-22 18:30:51.289017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.472 [2024-07-22 18:30:51.289098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:39.472 [2024-07-22 18:30:51.289134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.220 ms 00:23:39.472 [2024-07-22 18:30:51.289146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.472 [2024-07-22 18:30:51.307915] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:39.472 [2024-07-22 18:30:51.308025] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:39.472 [2024-07-22 18:30:51.308048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.472 [2024-07-22 18:30:51.308061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:39.472 [2024-07-22 18:30:51.308077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.708 ms 00:23:39.472 [2024-07-22 18:30:51.308089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.472 [2024-07-22 18:30:51.340177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.472 [2024-07-22 18:30:51.340288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:39.472 [2024-07-22 18:30:51.340340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.998 ms 00:23:39.472 [2024-07-22 18:30:51.340356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.472 [2024-07-22 
18:30:51.358610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.472 [2024-07-22 18:30:51.358703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:39.472 [2024-07-22 18:30:51.358724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.162 ms 00:23:39.472 [2024-07-22 18:30:51.358736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.472 [2024-07-22 18:30:51.377032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.472 [2024-07-22 18:30:51.377118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:39.472 [2024-07-22 18:30:51.377138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.215 ms 00:23:39.473 [2024-07-22 18:30:51.377150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.473 [2024-07-22 18:30:51.378231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.473 [2024-07-22 18:30:51.378266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:39.473 [2024-07-22 18:30:51.378282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:23:39.473 [2024-07-22 18:30:51.378293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.473 [2024-07-22 18:30:51.463195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.473 [2024-07-22 18:30:51.463283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:39.473 [2024-07-22 18:30:51.463333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.869 ms 00:23:39.473 [2024-07-22 18:30:51.463373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.473 [2024-07-22 18:30:51.477106] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:39.473 [2024-07-22 18:30:51.481681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.473 [2024-07-22 18:30:51.481755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:39.473 [2024-07-22 18:30:51.481789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.187 ms 00:23:39.473 [2024-07-22 18:30:51.481800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.473 [2024-07-22 18:30:51.481909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.473 [2024-07-22 18:30:51.481927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:39.473 [2024-07-22 18:30:51.481940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:39.473 [2024-07-22 18:30:51.481951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.473 [2024-07-22 18:30:51.482078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.473 [2024-07-22 18:30:51.482102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:39.473 [2024-07-22 18:30:51.482116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:39.473 [2024-07-22 18:30:51.482127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.473 [2024-07-22 18:30:51.482159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.473 [2024-07-22 18:30:51.482174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:39.473 [2024-07-22 18:30:51.482186] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:39.473 [2024-07-22 18:30:51.482198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.473 [2024-07-22 18:30:51.482239] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:39.473 [2024-07-22 18:30:51.482255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.473 [2024-07-22 18:30:51.482271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:39.473 [2024-07-22 18:30:51.482283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:39.473 [2024-07-22 18:30:51.482294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.731 [2024-07-22 18:30:51.515644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.731 [2024-07-22 18:30:51.515742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:39.731 [2024-07-22 18:30:51.515799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.324 ms 00:23:39.732 [2024-07-22 18:30:51.515812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.732 [2024-07-22 18:30:51.515914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.732 [2024-07-22 18:30:51.515934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:39.732 [2024-07-22 18:30:51.515947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:39.732 [2024-07-22 18:30:51.515958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.732 [2024-07-22 18:30:51.517486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.883 ms, result 0 00:24:20.432  Copying: 24/1024 [MB] (24 MBps) Copying: 50/1024 [MB] (26 MBps) Copying: 75/1024 [MB] (24 MBps) Copying: 100/1024 [MB] (25 MBps) Copying: 127/1024 [MB] (26 MBps) Copying: 154/1024 [MB] (27 MBps) Copying: 181/1024 [MB] (27 MBps) Copying: 207/1024 [MB] (25 MBps) Copying: 233/1024 [MB] (25 MBps) Copying: 259/1024 [MB] (25 MBps) Copying: 284/1024 [MB] (25 MBps) Copying: 310/1024 [MB] (25 MBps) Copying: 337/1024 [MB] (26 MBps) Copying: 360/1024 [MB] (23 MBps) Copying: 386/1024 [MB] (26 MBps) Copying: 411/1024 [MB] (24 MBps) Copying: 437/1024 [MB] (25 MBps) Copying: 463/1024 [MB] (26 MBps) Copying: 488/1024 [MB] (24 MBps) Copying: 513/1024 [MB] (25 MBps) Copying: 538/1024 [MB] (24 MBps) Copying: 564/1024 [MB] (26 MBps) Copying: 590/1024 [MB] (26 MBps) Copying: 615/1024 [MB] (25 MBps) Copying: 641/1024 [MB] (25 MBps) Copying: 667/1024 [MB] (26 MBps) Copying: 692/1024 [MB] (24 MBps) Copying: 717/1024 [MB] (25 MBps) Copying: 742/1024 [MB] (24 MBps) Copying: 766/1024 [MB] (24 MBps) Copying: 791/1024 [MB] (24 MBps) Copying: 816/1024 [MB] (25 MBps) Copying: 842/1024 [MB] (25 MBps) Copying: 868/1024 [MB] (26 MBps) Copying: 895/1024 [MB] (26 MBps) Copying: 920/1024 [MB] (25 MBps) Copying: 946/1024 [MB] (26 MBps) Copying: 972/1024 [MB] (25 MBps) Copying: 998/1024 [MB] (25 MBps) Copying: 1023/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-22 18:31:32.149449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.433 [2024-07-22 18:31:32.149961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:20.433 [2024-07-22 18:31:32.150000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 
00:24:20.433 [2024-07-22 18:31:32.150016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.433 [2024-07-22 18:31:32.150066] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:20.433 [2024-07-22 18:31:32.155929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.433 [2024-07-22 18:31:32.155976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:20.433 [2024-07-22 18:31:32.155995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.831 ms 00:24:20.433 [2024-07-22 18:31:32.156010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.433 [2024-07-22 18:31:32.156324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.433 [2024-07-22 18:31:32.156347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:20.433 [2024-07-22 18:31:32.156362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:24:20.433 [2024-07-22 18:31:32.156377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.433 [2024-07-22 18:31:32.161860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.433 [2024-07-22 18:31:32.161914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:20.433 [2024-07-22 18:31:32.161932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.458 ms 00:24:20.433 [2024-07-22 18:31:32.161946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.433 [2024-07-22 18:31:32.171010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.433 [2024-07-22 18:31:32.171072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:20.433 [2024-07-22 18:31:32.171091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.031 ms 00:24:20.433 [2024-07-22 18:31:32.171107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.433 [2024-07-22 18:31:32.212143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.433 [2024-07-22 18:31:32.212216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:20.433 [2024-07-22 18:31:32.212252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.924 ms 00:24:20.433 [2024-07-22 18:31:32.212267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.433 [2024-07-22 18:31:32.233743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.433 [2024-07-22 18:31:32.233801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:20.433 [2024-07-22 18:31:32.233834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.437 ms 00:24:20.433 [2024-07-22 18:31:32.233849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.433 [2024-07-22 18:31:32.234049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.433 [2024-07-22 18:31:32.234076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:20.433 [2024-07-22 18:31:32.234099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:24:20.433 [2024-07-22 18:31:32.234114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.433 [2024-07-22 18:31:32.272256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.433 [2024-07-22 
18:31:32.272308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:20.433 [2024-07-22 18:31:32.272329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.116 ms 00:24:20.433 [2024-07-22 18:31:32.272342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.433 [2024-07-22 18:31:32.310115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.433 [2024-07-22 18:31:32.310166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:20.433 [2024-07-22 18:31:32.310186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.740 ms 00:24:20.433 [2024-07-22 18:31:32.310200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.433 [2024-07-22 18:31:32.344844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.433 [2024-07-22 18:31:32.344886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:20.433 [2024-07-22 18:31:32.344924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.611 ms 00:24:20.433 [2024-07-22 18:31:32.344935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.433 [2024-07-22 18:31:32.373800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.433 [2024-07-22 18:31:32.373840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:20.433 [2024-07-22 18:31:32.373871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.788 ms 00:24:20.433 [2024-07-22 18:31:32.373882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.433 [2024-07-22 18:31:32.373907] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:20.433 [2024-07-22 18:31:32.373928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.373942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.373955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.373967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.373978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.373990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374071] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 
[2024-07-22 18:31:32.374359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:20.433 [2024-07-22 18:31:32.374429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:24:20.434 [2024-07-22 18:31:32.374683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.374989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:20.434 [2024-07-22 18:31:32.375240] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:20.434 [2024-07-22 18:31:32.375252] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84e247ec-79b0-422c-8ee5-87972b0ec164 00:24:20.434 [2024-07-22 18:31:32.375264] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:20.434 [2024-07-22 18:31:32.375281] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:20.434 [2024-07-22 18:31:32.375292] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:20.434 [2024-07-22 18:31:32.375303] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:20.434 [2024-07-22 18:31:32.375314] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:20.434 [2024-07-22 18:31:32.375325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:20.434 [2024-07-22 18:31:32.375336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:20.434 [2024-07-22 18:31:32.375345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:20.434 [2024-07-22 18:31:32.375355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:20.434 [2024-07-22 18:31:32.375367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.434 [2024-07-22 18:31:32.375378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:20.434 [2024-07-22 18:31:32.375416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:24:20.434 [2024-07-22 18:31:32.375449] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.434 [2024-07-22 18:31:32.391381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.434 [2024-07-22 18:31:32.391466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:20.434 [2024-07-22 18:31:32.391495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.887 ms 00:24:20.434 [2024-07-22 18:31:32.391507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.434 [2024-07-22 18:31:32.392100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.434 [2024-07-22 18:31:32.392119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:20.434 [2024-07-22 18:31:32.392132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:24:20.434 [2024-07-22 18:31:32.392143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.434 [2024-07-22 18:31:32.431071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.434 [2024-07-22 18:31:32.431178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:20.434 [2024-07-22 18:31:32.431227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.434 [2024-07-22 18:31:32.431239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.434 [2024-07-22 18:31:32.431318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.434 [2024-07-22 18:31:32.431334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:20.434 [2024-07-22 18:31:32.431346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.434 [2024-07-22 18:31:32.431358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.434 [2024-07-22 18:31:32.431511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.434 [2024-07-22 18:31:32.431533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:20.434 [2024-07-22 18:31:32.431547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.434 [2024-07-22 18:31:32.431557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.434 [2024-07-22 18:31:32.431581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.434 [2024-07-22 18:31:32.431595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:20.434 [2024-07-22 18:31:32.431608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.434 [2024-07-22 18:31:32.431619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.694 [2024-07-22 18:31:32.540499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.694 [2024-07-22 18:31:32.540588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:20.694 [2024-07-22 18:31:32.540608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.694 [2024-07-22 18:31:32.540621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.694 [2024-07-22 18:31:32.631963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.694 [2024-07-22 18:31:32.632034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:20.694 [2024-07-22 18:31:32.632055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:24:20.694 [2024-07-22 18:31:32.632068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.694 [2024-07-22 18:31:32.632151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.694 [2024-07-22 18:31:32.632192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:20.694 [2024-07-22 18:31:32.632205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.694 [2024-07-22 18:31:32.632216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.694 [2024-07-22 18:31:32.632295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.694 [2024-07-22 18:31:32.632310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:20.694 [2024-07-22 18:31:32.632322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.694 [2024-07-22 18:31:32.632333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.694 [2024-07-22 18:31:32.632471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.694 [2024-07-22 18:31:32.632498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:20.694 [2024-07-22 18:31:32.632511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.694 [2024-07-22 18:31:32.632522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.694 [2024-07-22 18:31:32.632573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.694 [2024-07-22 18:31:32.632592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:20.694 [2024-07-22 18:31:32.632604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.694 [2024-07-22 18:31:32.632615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.694 [2024-07-22 18:31:32.632662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.694 [2024-07-22 18:31:32.632677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:20.694 [2024-07-22 18:31:32.632696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.694 [2024-07-22 18:31:32.632707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.694 [2024-07-22 18:31:32.632788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.694 [2024-07-22 18:31:32.632808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:20.694 [2024-07-22 18:31:32.632821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.694 [2024-07-22 18:31:32.632832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.694 [2024-07-22 18:31:32.632982] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 483.500 ms, result 0 00:24:22.070 00:24:22.070 00:24:22.070 18:31:33 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:24.602 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:24.602 18:31:36 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:24.602 [2024-07-22 18:31:36.115478] Starting SPDK v24.09-pre git 
sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:24.602 [2024-07-22 18:31:36.115626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82639 ] 00:24:24.602 [2024-07-22 18:31:36.284025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.602 [2024-07-22 18:31:36.550383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.171 [2024-07-22 18:31:36.907302] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:25.171 [2024-07-22 18:31:36.907431] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:25.171 [2024-07-22 18:31:37.072301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.171 [2024-07-22 18:31:37.072373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:25.171 [2024-07-22 18:31:37.072405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:25.171 [2024-07-22 18:31:37.072417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.171 [2024-07-22 18:31:37.072505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.171 [2024-07-22 18:31:37.072526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:25.171 [2024-07-22 18:31:37.072539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:25.171 [2024-07-22 18:31:37.072555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.171 [2024-07-22 18:31:37.072588] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:25.171 [2024-07-22 18:31:37.073513] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:25.171 [2024-07-22 18:31:37.073557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.171 [2024-07-22 18:31:37.073577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:25.171 [2024-07-22 18:31:37.073590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:24:25.171 [2024-07-22 18:31:37.073602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.171 [2024-07-22 18:31:37.075580] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:25.171 [2024-07-22 18:31:37.092729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.171 [2024-07-22 18:31:37.092776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:25.171 [2024-07-22 18:31:37.092812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.151 ms 00:24:25.171 [2024-07-22 18:31:37.092824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.171 [2024-07-22 18:31:37.092899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.171 [2024-07-22 18:31:37.092919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:25.171 [2024-07-22 18:31:37.092937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:25.171 [2024-07-22 18:31:37.092948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.171 [2024-07-22 18:31:37.101756] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.171 [2024-07-22 18:31:37.101807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:25.171 [2024-07-22 18:31:37.101841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.711 ms 00:24:25.171 [2024-07-22 18:31:37.101853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.171 [2024-07-22 18:31:37.101960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.171 [2024-07-22 18:31:37.101983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:25.171 [2024-07-22 18:31:37.101995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:25.171 [2024-07-22 18:31:37.102006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.171 [2024-07-22 18:31:37.102078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.171 [2024-07-22 18:31:37.102097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:25.171 [2024-07-22 18:31:37.102109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:25.171 [2024-07-22 18:31:37.102120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.171 [2024-07-22 18:31:37.102157] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:25.171 [2024-07-22 18:31:37.107353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.171 [2024-07-22 18:31:37.107414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:25.171 [2024-07-22 18:31:37.107454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.205 ms 00:24:25.171 [2024-07-22 18:31:37.107472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.171 [2024-07-22 18:31:37.107537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.171 [2024-07-22 18:31:37.107561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:25.171 [2024-07-22 18:31:37.107582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:25.171 [2024-07-22 18:31:37.107599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.171 [2024-07-22 18:31:37.107672] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:25.171 [2024-07-22 18:31:37.107707] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:25.171 [2024-07-22 18:31:37.107788] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:25.172 [2024-07-22 18:31:37.107818] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:25.172 [2024-07-22 18:31:37.107924] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:25.172 [2024-07-22 18:31:37.107941] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:25.172 [2024-07-22 18:31:37.107956] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:25.172 [2024-07-22 18:31:37.107971] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:25.172 
[2024-07-22 18:31:37.107984] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:25.172 [2024-07-22 18:31:37.107996] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:25.172 [2024-07-22 18:31:37.108008] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:25.172 [2024-07-22 18:31:37.108018] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:25.172 [2024-07-22 18:31:37.108029] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:25.172 [2024-07-22 18:31:37.108041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.172 [2024-07-22 18:31:37.108057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:25.172 [2024-07-22 18:31:37.108069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:24:25.172 [2024-07-22 18:31:37.108093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.172 [2024-07-22 18:31:37.108188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.172 [2024-07-22 18:31:37.108203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:25.172 [2024-07-22 18:31:37.108215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:25.172 [2024-07-22 18:31:37.108226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.172 [2024-07-22 18:31:37.108332] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:25.172 [2024-07-22 18:31:37.108355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:25.172 [2024-07-22 18:31:37.108374] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:25.172 [2024-07-22 18:31:37.108386] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.172 [2024-07-22 18:31:37.108398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:25.172 [2024-07-22 18:31:37.108408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:25.172 [2024-07-22 18:31:37.108419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:25.172 [2024-07-22 18:31:37.108429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:25.172 [2024-07-22 18:31:37.108440] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:25.172 [2024-07-22 18:31:37.108449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:25.172 [2024-07-22 18:31:37.108460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:25.172 [2024-07-22 18:31:37.108470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:25.172 [2024-07-22 18:31:37.108481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:25.172 [2024-07-22 18:31:37.108491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:25.172 [2024-07-22 18:31:37.108502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:25.172 [2024-07-22 18:31:37.108512] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.172 [2024-07-22 18:31:37.108523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:25.172 [2024-07-22 18:31:37.108534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:25.172 [2024-07-22 
18:31:37.108545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.172 [2024-07-22 18:31:37.108556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:25.172 [2024-07-22 18:31:37.108580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:25.172 [2024-07-22 18:31:37.108592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.172 [2024-07-22 18:31:37.108602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:25.172 [2024-07-22 18:31:37.108613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:25.172 [2024-07-22 18:31:37.108624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.172 [2024-07-22 18:31:37.108634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:25.172 [2024-07-22 18:31:37.108645] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:25.172 [2024-07-22 18:31:37.108655] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.172 [2024-07-22 18:31:37.108666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:25.172 [2024-07-22 18:31:37.108691] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:25.172 [2024-07-22 18:31:37.108706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.172 [2024-07-22 18:31:37.108717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:25.172 [2024-07-22 18:31:37.108728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:25.172 [2024-07-22 18:31:37.108739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:25.172 [2024-07-22 18:31:37.108749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:25.172 [2024-07-22 18:31:37.108760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:25.172 [2024-07-22 18:31:37.108771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:25.172 [2024-07-22 18:31:37.108783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:25.172 [2024-07-22 18:31:37.108794] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:25.172 [2024-07-22 18:31:37.108804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.172 [2024-07-22 18:31:37.108815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:25.172 [2024-07-22 18:31:37.108825] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:25.172 [2024-07-22 18:31:37.108836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.172 [2024-07-22 18:31:37.108846] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:25.172 [2024-07-22 18:31:37.108858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:25.172 [2024-07-22 18:31:37.108869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:25.172 [2024-07-22 18:31:37.108880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.172 [2024-07-22 18:31:37.108892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:25.172 [2024-07-22 18:31:37.108903] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:25.172 [2024-07-22 18:31:37.108915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:24:25.172 [2024-07-22 18:31:37.108926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:25.172 [2024-07-22 18:31:37.108936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:25.172 [2024-07-22 18:31:37.108948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:25.172 [2024-07-22 18:31:37.108960] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:25.172 [2024-07-22 18:31:37.108977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:25.172 [2024-07-22 18:31:37.108990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:25.172 [2024-07-22 18:31:37.109002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:25.172 [2024-07-22 18:31:37.109014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:25.172 [2024-07-22 18:31:37.109027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:25.172 [2024-07-22 18:31:37.109038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:25.172 [2024-07-22 18:31:37.109051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:25.172 [2024-07-22 18:31:37.109063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:25.172 [2024-07-22 18:31:37.109074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:25.172 [2024-07-22 18:31:37.109086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:25.172 [2024-07-22 18:31:37.109098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:25.172 [2024-07-22 18:31:37.109110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:25.172 [2024-07-22 18:31:37.109121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:25.172 [2024-07-22 18:31:37.109134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:25.172 [2024-07-22 18:31:37.109146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:25.172 [2024-07-22 18:31:37.109171] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:25.172 [2024-07-22 18:31:37.109184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:25.172 [2024-07-22 18:31:37.109197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:25.172 [2024-07-22 18:31:37.109209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:25.172 [2024-07-22 18:31:37.109221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:25.172 [2024-07-22 18:31:37.109233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:25.172 [2024-07-22 18:31:37.109246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.172 [2024-07-22 18:31:37.109263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:25.172 [2024-07-22 18:31:37.109275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:24:25.172 [2024-07-22 18:31:37.109287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.173 [2024-07-22 18:31:37.159085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.173 [2024-07-22 18:31:37.159170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:25.173 [2024-07-22 18:31:37.159209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.731 ms 00:24:25.173 [2024-07-22 18:31:37.159221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.173 [2024-07-22 18:31:37.159354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.173 [2024-07-22 18:31:37.159371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:25.173 [2024-07-22 18:31:37.159384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:25.173 [2024-07-22 18:31:37.159443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.431 [2024-07-22 18:31:37.204675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.431 [2024-07-22 18:31:37.204758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:25.431 [2024-07-22 18:31:37.204779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.105 ms 00:24:25.431 [2024-07-22 18:31:37.204791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.431 [2024-07-22 18:31:37.204865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.431 [2024-07-22 18:31:37.204882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:25.431 [2024-07-22 18:31:37.204894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:25.431 [2024-07-22 18:31:37.204906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.431 [2024-07-22 18:31:37.205534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.205568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:25.432 [2024-07-22 18:31:37.205584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:24:25.432 [2024-07-22 18:31:37.205595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.205787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.205812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:24:25.432 [2024-07-22 18:31:37.205830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:24:25.432 [2024-07-22 18:31:37.205842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.224391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.224437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:25.432 [2024-07-22 18:31:37.224472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.519 ms 00:24:25.432 [2024-07-22 18:31:37.224484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.241636] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:25.432 [2024-07-22 18:31:37.241710] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:25.432 [2024-07-22 18:31:37.241763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.241775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:25.432 [2024-07-22 18:31:37.241789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.119 ms 00:24:25.432 [2024-07-22 18:31:37.241815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.271978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.272058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:25.432 [2024-07-22 18:31:37.272092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.097 ms 00:24:25.432 [2024-07-22 18:31:37.272105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.287948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.287995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:25.432 [2024-07-22 18:31:37.288031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.754 ms 00:24:25.432 [2024-07-22 18:31:37.288043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.303866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.303912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:25.432 [2024-07-22 18:31:37.303930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.760 ms 00:24:25.432 [2024-07-22 18:31:37.303942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.304910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.304946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:25.432 [2024-07-22 18:31:37.304963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:24:25.432 [2024-07-22 18:31:37.304975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.384930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.385001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:25.432 [2024-07-22 18:31:37.385040] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.929 ms 00:24:25.432 [2024-07-22 18:31:37.385052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.398516] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:25.432 [2024-07-22 18:31:37.403052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.403091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:25.432 [2024-07-22 18:31:37.403142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.887 ms 00:24:25.432 [2024-07-22 18:31:37.403153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.403279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.403298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:25.432 [2024-07-22 18:31:37.403312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:25.432 [2024-07-22 18:31:37.403323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.403452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.403488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:25.432 [2024-07-22 18:31:37.403509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:25.432 [2024-07-22 18:31:37.403527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.403575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.403592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:25.432 [2024-07-22 18:31:37.403604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:25.432 [2024-07-22 18:31:37.403615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.403658] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:25.432 [2024-07-22 18:31:37.403675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.403688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:25.432 [2024-07-22 18:31:37.403762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:25.432 [2024-07-22 18:31:37.403774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.435733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.435793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:25.432 [2024-07-22 18:31:37.435828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.931 ms 00:24:25.432 [2024-07-22 18:31:37.435840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.435926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.432 [2024-07-22 18:31:37.435956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:25.432 [2024-07-22 18:31:37.435970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:25.432 [2024-07-22 18:31:37.435982] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:25.432 [2024-07-22 18:31:37.437376] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 364.563 ms, result 0 00:25:05.966  Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-22 18:32:17.954309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.966 [2024-07-22 18:32:17.954390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:05.966 [2024-07-22 18:32:17.954412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:05.966 [2024-07-22 18:32:17.954425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.966 [2024-07-22 18:32:17.956485] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:05.966 [2024-07-22 18:32:17.962982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.966 [2024-07-22 18:32:17.963027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:05.966 [2024-07-22 18:32:17.963045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.434 ms 00:25:05.966 [2024-07-22 18:32:17.963056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.966 [2024-07-22 18:32:17.974984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.966 [2024-07-22 18:32:17.975033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:05.966 [2024-07-22 18:32:17.975052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.246 ms 00:25:05.966 [2024-07-22 18:32:17.975065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.224 [2024-07-22 18:32:17.998270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.224 [2024-07-22 18:32:17.998336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:06.224 [2024-07-22 18:32:17.998356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.180 ms 00:25:06.224 [2024-07-22 18:32:17.998368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.224 [2024-07-22
18:32:18.004974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.224 [2024-07-22 18:32:18.005011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:06.224 [2024-07-22 18:32:18.005026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.564 ms 00:25:06.224 [2024-07-22 18:32:18.005037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.224 [2024-07-22 18:32:18.036265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.224 [2024-07-22 18:32:18.036326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:06.224 [2024-07-22 18:32:18.036344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.173 ms 00:25:06.224 [2024-07-22 18:32:18.036356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.224 [2024-07-22 18:32:18.054037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.224 [2024-07-22 18:32:18.054082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:06.224 [2024-07-22 18:32:18.054101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.633 ms 00:25:06.224 [2024-07-22 18:32:18.054131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.224 [2024-07-22 18:32:18.150621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.224 [2024-07-22 18:32:18.150739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:06.224 [2024-07-22 18:32:18.150763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.393 ms 00:25:06.224 [2024-07-22 18:32:18.150777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.224 [2024-07-22 18:32:18.182518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.224 [2024-07-22 18:32:18.182572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:06.224 [2024-07-22 18:32:18.182591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.715 ms 00:25:06.224 [2024-07-22 18:32:18.182603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.224 [2024-07-22 18:32:18.212729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.224 [2024-07-22 18:32:18.212780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:06.224 [2024-07-22 18:32:18.212799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.076 ms 00:25:06.224 [2024-07-22 18:32:18.212811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.483 [2024-07-22 18:32:18.242928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.483 [2024-07-22 18:32:18.242982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:06.483 [2024-07-22 18:32:18.243018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.062 ms 00:25:06.483 [2024-07-22 18:32:18.243030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.483 [2024-07-22 18:32:18.273159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.483 [2024-07-22 18:32:18.273221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:06.483 [2024-07-22 18:32:18.273241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.967 ms 00:25:06.483 [2024-07-22 18:32:18.273253] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.483 [2024-07-22 18:32:18.273306] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:06.483 [2024-07-22 18:32:18.273330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118784 / 261120 wr_cnt: 1 state: open 00:25:06.483 [2024-07-22 18:32:18.273345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:06.483 [2024-07-22 18:32:18.273514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273949] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.273998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274256] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 
18:32:18.274558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:06.484 [2024-07-22 18:32:18.274592] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:06.484 [2024-07-22 18:32:18.274603] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84e247ec-79b0-422c-8ee5-87972b0ec164 00:25:06.484 [2024-07-22 18:32:18.274616] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118784 00:25:06.484 [2024-07-22 18:32:18.274627] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119744 00:25:06.484 [2024-07-22 18:32:18.274638] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118784 00:25:06.484 [2024-07-22 18:32:18.274650] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:25:06.485 [2024-07-22 18:32:18.274661] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:06.485 [2024-07-22 18:32:18.274690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:06.485 [2024-07-22 18:32:18.274703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:06.485 [2024-07-22 18:32:18.274713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:06.485 [2024-07-22 18:32:18.274723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:06.485 [2024-07-22 18:32:18.274734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.485 [2024-07-22 18:32:18.274751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:06.485 [2024-07-22 18:32:18.274764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:25:06.485 [2024-07-22 18:32:18.274775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.485 [2024-07-22 18:32:18.291513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.485 [2024-07-22 18:32:18.291556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:06.485 [2024-07-22 18:32:18.291589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.688 ms 00:25:06.485 [2024-07-22 18:32:18.291601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.485 [2024-07-22 18:32:18.292095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.485 [2024-07-22 18:32:18.292124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:06.485 [2024-07-22 18:32:18.292139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:25:06.485 [2024-07-22 18:32:18.292150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.485 [2024-07-22 18:32:18.330426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.485 [2024-07-22 18:32:18.330503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:06.485 [2024-07-22 18:32:18.330522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.485 [2024-07-22 18:32:18.330541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.485 [2024-07-22 18:32:18.330630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.485 [2024-07-22 18:32:18.330647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands metadata 00:25:06.485 [2024-07-22 18:32:18.330660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.485 [2024-07-22 18:32:18.330671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.485 [2024-07-22 18:32:18.330778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.485 [2024-07-22 18:32:18.330797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:06.485 [2024-07-22 18:32:18.330810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.485 [2024-07-22 18:32:18.330821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.485 [2024-07-22 18:32:18.330851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.485 [2024-07-22 18:32:18.330865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:06.485 [2024-07-22 18:32:18.330876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.485 [2024-07-22 18:32:18.330887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.485 [2024-07-22 18:32:18.436921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.485 [2024-07-22 18:32:18.436996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:06.485 [2024-07-22 18:32:18.437015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.485 [2024-07-22 18:32:18.437028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.744 [2024-07-22 18:32:18.554019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.744 [2024-07-22 18:32:18.554114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:06.744 [2024-07-22 18:32:18.554142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.744 [2024-07-22 18:32:18.554161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.744 [2024-07-22 18:32:18.554287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.744 [2024-07-22 18:32:18.554316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:06.744 [2024-07-22 18:32:18.554337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.744 [2024-07-22 18:32:18.554354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.744 [2024-07-22 18:32:18.554420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.744 [2024-07-22 18:32:18.554453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:06.744 [2024-07-22 18:32:18.554471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.744 [2024-07-22 18:32:18.554486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.744 [2024-07-22 18:32:18.554655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.744 [2024-07-22 18:32:18.554717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:06.744 [2024-07-22 18:32:18.554741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.744 [2024-07-22 18:32:18.554772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.744 [2024-07-22 18:32:18.554843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.744 [2024-07-22 
18:32:18.554878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:06.744 [2024-07-22 18:32:18.554907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.744 [2024-07-22 18:32:18.554928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.744 [2024-07-22 18:32:18.554993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.744 [2024-07-22 18:32:18.555018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:06.744 [2024-07-22 18:32:18.555040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.744 [2024-07-22 18:32:18.555084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.744 [2024-07-22 18:32:18.555185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.744 [2024-07-22 18:32:18.555220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:06.744 [2024-07-22 18:32:18.555239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.744 [2024-07-22 18:32:18.555256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.744 [2024-07-22 18:32:18.555495] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 602.113 ms, result 0 00:25:08.645 00:25:08.645 00:25:08.645 18:32:20 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:08.645 [2024-07-22 18:32:20.433671] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
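For reference, the spdk_dd read-back above pulls --count=262144 blocks out of the restored ftl0 bdev into testfile, skipping the first --skip=131072 blocks of the input. Assuming the FTL bdev's 4 KiB block size (an assumption on my part; spdk_dd counts --skip/--count in input I/O units), that works out to a 512 MiB starting offset and a 1 GiB copy, consistent with the 1024 MB totals reported by the progress output in this run. A minimal sketch of the same arithmetic plus a byte-wise check of the read-back file (the reference path is hypothetical):

    # Sketch only: assumes 4 KiB FTL blocks and block-unit --skip/--count
    # semantics; "reference.bin" stands in for a copy of the data that was
    # originally written to the same region.
    BLOCK_SIZE = 4096                              # assumed block size in bytes
    SKIP_BLOCKS, COUNT_BLOCKS = 131072, 262144     # from the command above

    offset = SKIP_BLOCKS * BLOCK_SIZE              # 536870912 B  = 512 MiB
    length = COUNT_BLOCKS * BLOCK_SIZE             # 1073741824 B = 1024 MiB

    def files_match(read_back, reference, chunk=1 << 20):
        """Compare two files chunk by chunk; True if identical."""
        with open(read_back, "rb") as a, open(reference, "rb") as b:
            while True:
                ca, cb = a.read(chunk), b.read(chunk)
                if ca != cb:
                    return False
                if not ca:                         # both exhausted together
                    return True

    print(offset // 2**20, length // 2**20)        # -> 512 1024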
00:25:08.645 [2024-07-22 18:32:20.433861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83075 ] 00:25:08.645 [2024-07-22 18:32:20.606779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.904 [2024-07-22 18:32:20.841890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.473 [2024-07-22 18:32:21.226821] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.474 [2024-07-22 18:32:21.226898] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.474 [2024-07-22 18:32:21.391347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.474 [2024-07-22 18:32:21.391422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:09.474 [2024-07-22 18:32:21.391449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:09.474 [2024-07-22 18:32:21.391461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.474 [2024-07-22 18:32:21.391538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.474 [2024-07-22 18:32:21.391559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:09.474 [2024-07-22 18:32:21.391572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:09.474 [2024-07-22 18:32:21.391588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.474 [2024-07-22 18:32:21.391620] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:09.474 [2024-07-22 18:32:21.392539] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:09.474 [2024-07-22 18:32:21.392572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.474 [2024-07-22 18:32:21.392590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:09.474 [2024-07-22 18:32:21.392604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:25:09.474 [2024-07-22 18:32:21.392615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.474 [2024-07-22 18:32:21.394521] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:09.474 [2024-07-22 18:32:21.411519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.474 [2024-07-22 18:32:21.411559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:09.474 [2024-07-22 18:32:21.411576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.999 ms 00:25:09.474 [2024-07-22 18:32:21.411588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.474 [2024-07-22 18:32:21.411666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.474 [2024-07-22 18:32:21.411706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:09.474 [2024-07-22 18:32:21.411725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:09.474 [2024-07-22 18:32:21.411736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.474 [2024-07-22 18:32:21.420177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:09.474 [2024-07-22 18:32:21.420219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:09.474 [2024-07-22 18:32:21.420235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.345 ms 00:25:09.474 [2024-07-22 18:32:21.420247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.474 [2024-07-22 18:32:21.420355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.474 [2024-07-22 18:32:21.420378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:09.474 [2024-07-22 18:32:21.420391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:09.474 [2024-07-22 18:32:21.420402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.474 [2024-07-22 18:32:21.420469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.474 [2024-07-22 18:32:21.420486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:09.474 [2024-07-22 18:32:21.420499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:09.474 [2024-07-22 18:32:21.420510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.474 [2024-07-22 18:32:21.420547] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:09.474 [2024-07-22 18:32:21.425540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.474 [2024-07-22 18:32:21.425572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:09.474 [2024-07-22 18:32:21.425586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.002 ms 00:25:09.474 [2024-07-22 18:32:21.425597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.474 [2024-07-22 18:32:21.425644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.474 [2024-07-22 18:32:21.425659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:09.474 [2024-07-22 18:32:21.425672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:09.474 [2024-07-22 18:32:21.425699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.474 [2024-07-22 18:32:21.425768] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:09.474 [2024-07-22 18:32:21.425801] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:09.474 [2024-07-22 18:32:21.425843] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:09.474 [2024-07-22 18:32:21.425867] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:09.474 [2024-07-22 18:32:21.425971] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:09.474 [2024-07-22 18:32:21.425986] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:09.474 [2024-07-22 18:32:21.426000] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:09.474 [2024-07-22 18:32:21.426015] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:09.474 [2024-07-22 18:32:21.426028] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:09.474 [2024-07-22 18:32:21.426040] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:09.474 [2024-07-22 18:32:21.426052] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:09.474 [2024-07-22 18:32:21.426063] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:09.474 [2024-07-22 18:32:21.426074] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:09.474 [2024-07-22 18:32:21.426085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.474 [2024-07-22 18:32:21.426101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:09.474 [2024-07-22 18:32:21.426113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:25:09.474 [2024-07-22 18:32:21.426123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.474 [2024-07-22 18:32:21.426216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.474 [2024-07-22 18:32:21.426231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:09.474 [2024-07-22 18:32:21.426242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:09.474 [2024-07-22 18:32:21.426252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.474 [2024-07-22 18:32:21.426357] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:09.474 [2024-07-22 18:32:21.426372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:09.474 [2024-07-22 18:32:21.426389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.474 [2024-07-22 18:32:21.426400] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.474 [2024-07-22 18:32:21.426414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:09.474 [2024-07-22 18:32:21.426425] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:09.474 [2024-07-22 18:32:21.426436] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:09.474 [2024-07-22 18:32:21.426446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:09.474 [2024-07-22 18:32:21.426457] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:09.474 [2024-07-22 18:32:21.426467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.474 [2024-07-22 18:32:21.426477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:09.474 [2024-07-22 18:32:21.426487] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:09.474 [2024-07-22 18:32:21.426497] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.474 [2024-07-22 18:32:21.426508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:09.474 [2024-07-22 18:32:21.426518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:09.474 [2024-07-22 18:32:21.426528] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.474 [2024-07-22 18:32:21.426549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:09.474 [2024-07-22 18:32:21.426560] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:09.474 [2024-07-22 18:32:21.426570] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.474 [2024-07-22 18:32:21.426580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:09.474 [2024-07-22 18:32:21.426603] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:09.474 [2024-07-22 18:32:21.426613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.474 [2024-07-22 18:32:21.426623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:09.474 [2024-07-22 18:32:21.426634] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:09.474 [2024-07-22 18:32:21.426644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.474 [2024-07-22 18:32:21.426653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:09.474 [2024-07-22 18:32:21.426664] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:09.474 [2024-07-22 18:32:21.426674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.474 [2024-07-22 18:32:21.426701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:09.475 [2024-07-22 18:32:21.426713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:09.475 [2024-07-22 18:32:21.426724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.475 [2024-07-22 18:32:21.426734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:09.475 [2024-07-22 18:32:21.426744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:09.475 [2024-07-22 18:32:21.426754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.475 [2024-07-22 18:32:21.426765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:09.475 [2024-07-22 18:32:21.426775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:09.475 [2024-07-22 18:32:21.426787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.475 [2024-07-22 18:32:21.426798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:09.475 [2024-07-22 18:32:21.426808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:09.475 [2024-07-22 18:32:21.426818] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.475 [2024-07-22 18:32:21.426829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:09.475 [2024-07-22 18:32:21.426839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:09.475 [2024-07-22 18:32:21.426849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.475 [2024-07-22 18:32:21.426859] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:09.475 [2024-07-22 18:32:21.426870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:09.475 [2024-07-22 18:32:21.426881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.475 [2024-07-22 18:32:21.426892] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.475 [2024-07-22 18:32:21.426903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:09.475 [2024-07-22 18:32:21.426914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:09.475 [2024-07-22 18:32:21.426924] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:09.475 
[2024-07-22 18:32:21.426935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:09.475 [2024-07-22 18:32:21.426945] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:09.475 [2024-07-22 18:32:21.426956] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:09.475 [2024-07-22 18:32:21.426968] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:09.475 [2024-07-22 18:32:21.426981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.475 [2024-07-22 18:32:21.426994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:09.475 [2024-07-22 18:32:21.427006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:09.475 [2024-07-22 18:32:21.427017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:09.475 [2024-07-22 18:32:21.427028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:09.475 [2024-07-22 18:32:21.427039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:09.475 [2024-07-22 18:32:21.427050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:09.475 [2024-07-22 18:32:21.427069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:09.475 [2024-07-22 18:32:21.427081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:09.475 [2024-07-22 18:32:21.427092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:09.475 [2024-07-22 18:32:21.427103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:09.475 [2024-07-22 18:32:21.427114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:09.475 [2024-07-22 18:32:21.427125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:09.475 [2024-07-22 18:32:21.427136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:09.475 [2024-07-22 18:32:21.427148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:09.475 [2024-07-22 18:32:21.427159] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:09.475 [2024-07-22 18:32:21.427171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.475 [2024-07-22 18:32:21.427184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:09.475 [2024-07-22 18:32:21.427195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:09.475 [2024-07-22 18:32:21.427206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:09.475 [2024-07-22 18:32:21.427218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:09.475 [2024-07-22 18:32:21.427230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.475 [2024-07-22 18:32:21.427247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:09.475 [2024-07-22 18:32:21.427259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:25:09.475 [2024-07-22 18:32:21.427269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.475 [2024-07-22 18:32:21.477618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.475 [2024-07-22 18:32:21.477692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:09.475 [2024-07-22 18:32:21.477715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.282 ms 00:25:09.475 [2024-07-22 18:32:21.477727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.475 [2024-07-22 18:32:21.477857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.475 [2024-07-22 18:32:21.477880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:09.475 [2024-07-22 18:32:21.477894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:09.475 [2024-07-22 18:32:21.477906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.734 [2024-07-22 18:32:21.520760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.734 [2024-07-22 18:32:21.520819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:09.735 [2024-07-22 18:32:21.520839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.759 ms 00:25:09.735 [2024-07-22 18:32:21.520850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.520925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.520941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:09.735 [2024-07-22 18:32:21.520955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:09.735 [2024-07-22 18:32:21.520966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.521584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.521609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:09.735 [2024-07-22 18:32:21.521623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:25:09.735 [2024-07-22 18:32:21.521634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.521837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.521858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:09.735 [2024-07-22 18:32:21.521870] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:25:09.735 [2024-07-22 18:32:21.521881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.540386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.540443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:09.735 [2024-07-22 18:32:21.540461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.475 ms 00:25:09.735 [2024-07-22 18:32:21.540473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.557426] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:09.735 [2024-07-22 18:32:21.557475] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:09.735 [2024-07-22 18:32:21.557501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.557514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:09.735 [2024-07-22 18:32:21.557528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.870 ms 00:25:09.735 [2024-07-22 18:32:21.557539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.586971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.587052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:09.735 [2024-07-22 18:32:21.587080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.375 ms 00:25:09.735 [2024-07-22 18:32:21.587092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.603649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.603704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:09.735 [2024-07-22 18:32:21.603723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.480 ms 00:25:09.735 [2024-07-22 18:32:21.603735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.619154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.619201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:09.735 [2024-07-22 18:32:21.619218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.360 ms 00:25:09.735 [2024-07-22 18:32:21.619229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.620205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.620240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:09.735 [2024-07-22 18:32:21.620255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:25:09.735 [2024-07-22 18:32:21.620266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.698470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.698543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:09.735 [2024-07-22 18:32:21.698563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 78.177 ms 00:25:09.735 [2024-07-22 18:32:21.698575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.711687] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:09.735 [2024-07-22 18:32:21.715995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.716029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:09.735 [2024-07-22 18:32:21.716046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.334 ms 00:25:09.735 [2024-07-22 18:32:21.716058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.716179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.716199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:09.735 [2024-07-22 18:32:21.716213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:09.735 [2024-07-22 18:32:21.716224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.718240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.718275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:09.735 [2024-07-22 18:32:21.718289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.963 ms 00:25:09.735 [2024-07-22 18:32:21.718307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.718344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.718360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:09.735 [2024-07-22 18:32:21.718379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:09.735 [2024-07-22 18:32:21.718390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.735 [2024-07-22 18:32:21.718438] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:09.735 [2024-07-22 18:32:21.718455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.735 [2024-07-22 18:32:21.718466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:09.735 [2024-07-22 18:32:21.718483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:09.735 [2024-07-22 18:32:21.718494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.994 [2024-07-22 18:32:21.749751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.994 [2024-07-22 18:32:21.749799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:09.994 [2024-07-22 18:32:21.749816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.231 ms 00:25:09.994 [2024-07-22 18:32:21.749828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.994 [2024-07-22 18:32:21.749918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.994 [2024-07-22 18:32:21.749947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:09.994 [2024-07-22 18:32:21.749961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:09.994 [2024-07-22 18:32:21.749972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
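Each management step in these logs is emitted by mngt/ftl_mngt.c as a four-notice group: "Action" (line 427), "name: ..." (428), "duration: ... ms" (430) and "status: ..." (431). A minimal sketch, assuming exactly this format with one notice per line as in the raw console output, that totals the time spent per step name from a saved log:

    import re
    from collections import defaultdict

    # Match the 428 (name) and 430 (duration) trace_step notices.
    NAME_RE = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
    DUR_RE = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

    def step_durations(lines):
        """Sum trace_step durations (ms) per step name."""
        totals = defaultdict(float)
        pending = None                  # name seen, duration not yet seen
        for line in lines:
            m = NAME_RE.search(line)
            if m:
                pending = m.group(1).strip()
                continue
            m = DUR_RE.search(line)
            if m and pending is not None:
                totals[pending] += float(m.group(1))
                pending = None
        return totals

    # e.g. step_durations(open("console.log")) would report ~96.4 ms for
    # 'Persist P2L metadata' in the first 'FTL shutdown' above.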
00:25:09.994 [2024-07-22 18:32:21.757476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 364.441 ms, result 0 00:25:50.239  Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-22 18:33:01.973071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.239 [2024-07-22 18:33:01.973165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:50.239 [2024-07-22 18:33:01.973195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:50.239 [2024-07-22 18:33:01.973212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.239 [2024-07-22 18:33:01.973255] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:50.239 [2024-07-22 18:33:01.980060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.239 [2024-07-22 18:33:01.980124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:50.239 [2024-07-22 18:33:01.980147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.766 ms 00:25:50.239 [2024-07-22 18:33:01.980163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.239 [2024-07-22 18:33:01.980515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.239 [2024-07-22 18:33:01.980555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:50.239 [2024-07-22 18:33:01.980580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:25:50.239 [2024-07-22 18:33:01.980595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.239 [2024-07-22 18:33:01.986004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.239 [2024-07-22 18:33:01.986045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:50.239 [2024-07-22 18:33:01.986069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.380 ms 00:25:50.239 [2024-07-22 18:33:01.986099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.239 [2024-07-22 18:33:01.992631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
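
The Copying progress above is the data phase of ftl_restore: spdk_dd streaming the 1 GiB test file through the ftl0 bdev, whose contents are md5-verified again after the restart (the "testfile: OK" record further down). The figures are self-consistent: 1024 MB at an average of 25 MBps is roughly 41 s, matching the gap between the "FTL startup" finish at 18:32:21.757 and the first shutdown record at 18:33:01.973. A sketch of such a copy; the exact spdk_dd flags are an assumption (they vary slightly across SPDK releases), while the file and config paths are the ones this run cleans up later:

  # stream the test file into the FTL bdev; spdk_dd learns about ftl0
  # from the saved bdev configuration rather than a live RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --ob=ftl0 --bs=4096 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
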
00:25:50.239 [2024-07-22 18:33:01.992666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:50.239 [2024-07-22 18:33:01.992690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.491 ms 00:25:50.239 [2024-07-22 18:33:01.992703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.239 [2024-07-22 18:33:02.024456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.239 [2024-07-22 18:33:02.024518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:50.239 [2024-07-22 18:33:02.024537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.701 ms 00:25:50.239 [2024-07-22 18:33:02.024549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.239 [2024-07-22 18:33:02.042648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.239 [2024-07-22 18:33:02.042727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:50.239 [2024-07-22 18:33:02.042746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.033 ms 00:25:50.239 [2024-07-22 18:33:02.042767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.239 [2024-07-22 18:33:02.131022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.239 [2024-07-22 18:33:02.131110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:50.239 [2024-07-22 18:33:02.131132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.211 ms 00:25:50.239 [2024-07-22 18:33:02.131145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.239 [2024-07-22 18:33:02.163564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.239 [2024-07-22 18:33:02.163630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:50.239 [2024-07-22 18:33:02.163649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.393 ms 00:25:50.239 [2024-07-22 18:33:02.163662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.239 [2024-07-22 18:33:02.194344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.239 [2024-07-22 18:33:02.194417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:50.239 [2024-07-22 18:33:02.194435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.617 ms 00:25:50.239 [2024-07-22 18:33:02.194447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.239 [2024-07-22 18:33:02.225033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.239 [2024-07-22 18:33:02.225106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:50.239 [2024-07-22 18:33:02.225125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.529 ms 00:25:50.239 [2024-07-22 18:33:02.225158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.499 [2024-07-22 18:33:02.257634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.499 [2024-07-22 18:33:02.257717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:50.499 [2024-07-22 18:33:02.257737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.346 ms 00:25:50.499 [2024-07-22 18:33:02.257749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.499 [2024-07-22 
18:33:02.257828] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:50.499 [2024-07-22 18:33:02.257853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:25:50.499 [2024-07-22 18:33:02.257868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.257881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.257892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.257904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.257916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.257928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.257939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.257951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.257963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.257974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.257986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.257998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 
18:33:02.258139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:50.499 [2024-07-22 18:33:02.258397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:25:50.500 [2024-07-22 18:33:02.258432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.258993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.259005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.259018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.259030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.259041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:50.500 [2024-07-22 18:33:02.259062] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:50.500 [2024-07-22 18:33:02.259080] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84e247ec-79b0-422c-8ee5-87972b0ec164 00:25:50.500 [2024-07-22 18:33:02.259092] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:25:50.500 [2024-07-22 18:33:02.259103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15808 00:25:50.500 [2024-07-22 18:33:02.259115] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14848 00:25:50.500 [2024-07-22 18:33:02.259127] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0647 00:25:50.500 [2024-07-22 18:33:02.259138] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:50.500 [2024-07-22 18:33:02.259160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:50.500 [2024-07-22 18:33:02.259171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:50.500 [2024-07-22 18:33:02.259181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:50.500 [2024-07-22 18:33:02.259193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:50.500 [2024-07-22 18:33:02.259204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.500 [2024-07-22 18:33:02.259220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:50.500 [2024-07-22 18:33:02.259232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.378 ms 00:25:50.500 [2024-07-22 18:33:02.259243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.500 [2024-07-22 18:33:02.276823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.500 [2024-07-22 18:33:02.276893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:50.500 [2024-07-22 18:33:02.276913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.521 ms 00:25:50.500 [2024-07-22 18:33:02.276946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.500 [2024-07-22 18:33:02.277438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.500 [2024-07-22 18:33:02.277467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:50.500 [2024-07-22 18:33:02.277481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:25:50.500 [2024-07-22 18:33:02.277493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.500 [2024-07-22 18:33:02.315531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.500 [2024-07-22 18:33:02.315605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:50.500 [2024-07-22 18:33:02.315624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.500 [2024-07-22 18:33:02.315635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.500 [2024-07-22 18:33:02.315740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.500 [2024-07-22 18:33:02.315757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:50.500 [2024-07-22 18:33:02.315769] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.500 [2024-07-22 18:33:02.315780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.500 [2024-07-22 18:33:02.315873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.500 [2024-07-22 18:33:02.315892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:50.500 [2024-07-22 18:33:02.315905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.500 [2024-07-22 18:33:02.315916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.500 [2024-07-22 18:33:02.315944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.500 [2024-07-22 18:33:02.315958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:50.500 [2024-07-22 18:33:02.315969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.500 [2024-07-22 18:33:02.315980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.501 [2024-07-22 18:33:02.452996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.501 [2024-07-22 18:33:02.453086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:50.501 [2024-07-22 18:33:02.453110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.501 [2024-07-22 18:33:02.453124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.759 [2024-07-22 18:33:02.564585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.759 [2024-07-22 18:33:02.564665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:50.759 [2024-07-22 18:33:02.564708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.759 [2024-07-22 18:33:02.564725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.759 [2024-07-22 18:33:02.564829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.759 [2024-07-22 18:33:02.564851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:50.759 [2024-07-22 18:33:02.564867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.759 [2024-07-22 18:33:02.564881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.759 [2024-07-22 18:33:02.564938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.759 [2024-07-22 18:33:02.564972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:50.759 [2024-07-22 18:33:02.564987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.759 [2024-07-22 18:33:02.565000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.759 [2024-07-22 18:33:02.565145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.759 [2024-07-22 18:33:02.565179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:50.759 [2024-07-22 18:33:02.565196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.759 [2024-07-22 18:33:02.565210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.759 [2024-07-22 18:33:02.565280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.759 [2024-07-22 18:33:02.565301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:25:50.759 [2024-07-22 18:33:02.565324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.759 [2024-07-22 18:33:02.565337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.759 [2024-07-22 18:33:02.565400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.759 [2024-07-22 18:33:02.565416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:50.759 [2024-07-22 18:33:02.565431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.759 [2024-07-22 18:33:02.565444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.759 [2024-07-22 18:33:02.565505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.759 [2024-07-22 18:33:02.565532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:50.759 [2024-07-22 18:33:02.565547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.759 [2024-07-22 18:33:02.565560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.759 [2024-07-22 18:33:02.565753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 592.638 ms, result 0 00:25:51.812 00:25:51.812 00:25:51.812 18:33:03 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:54.350 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:54.350 18:33:05 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:54.350 18:33:05 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:54.350 18:33:05 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:54.350 18:33:06 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:54.350 18:33:06 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:54.350 18:33:06 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 81523 00:25:54.350 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 81523 ']' 00:25:54.350 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 81523 00:25:54.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (81523) - No such process 00:25:54.350 Process with pid 81523 is not found 00:25:54.350 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 81523 is not found' 00:25:54.350 18:33:06 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:54.350 Remove shared memory files 00:25:54.350 18:33:06 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:54.350 18:33:06 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:54.350 18:33:06 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:54.350 18:33:06 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:54.350 18:33:06 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:54.350 18:33:06 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:54.350 00:25:54.350 real 3m18.983s 00:25:54.350 user 3m4.635s 00:25:54.350 sys 0m16.616s 00:25:54.350 ************************************ 00:25:54.350 END TEST ftl_restore 00:25:54.350 ************************************ 00:25:54.350 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:25:54.350 18:33:06 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:54.350 18:33:06 ftl -- common/autotest_common.sh@1142 -- # return 0 00:25:54.350 18:33:06 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:54.350 18:33:06 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:54.350 18:33:06 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:54.350 18:33:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:54.350 ************************************ 00:25:54.350 START TEST ftl_dirty_shutdown 00:25:54.350 ************************************ 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:54.350 * Looking for test storage... 00:25:54.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:54.350 
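
The ftl_dirty_shutdown test launched above was invoked as "dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0", and the xtrace that follows shows how those arguments land: -c selects the PCIe address used for the NV cache and the remaining positional argument becomes the base device. Reconstructed as a sketch (the variable names match the trace below; the -u branch is an assumption based only on the ":u:c:" optstring):

  while getopts ':u:c:' opt; do
      case $opt in
          c) nv_cache=$OPTARG ;;   # -c 0000:00:10.0, write-buffer cache device
          u) uuid=$OPTARG ;;       # assumed: pre-existing FTL UUID to reattach
      esac
  done
  shift $((OPTIND - 1))            # the trace logs this step as "shift 2"
  device=$1                        # 0000:00:11.0, base device for the FTL bdev
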
18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:54.350 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:54.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.351 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=83580 00:25:54.351 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 83580 00:25:54.351 18:33:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:54.351 18:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 83580 ']' 00:25:54.351 18:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.351 18:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:54.351 18:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.351 18:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:54.351 18:33:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:54.611 [2024-07-22 18:33:06.411655] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
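
waitforlisten above blocks until the freshly forked target (svcpid 83580) is ready to serve RPCs on /var/tmp/spdk.sock. A stripped-down equivalent of what autotest_common.sh does, without its retry counters and timeout handling:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  # rpc_get_methods is a cheap RPC that starts succeeding once app
  # initialization (logged just below as "Total cores available" and
  # "Reactor started on core 0") has completed
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
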
00:25:54.611 [2024-07-22 18:33:06.411826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83580 ] 00:25:54.611 [2024-07-22 18:33:06.579607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.869 [2024-07-22 18:33:06.838236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.803 18:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:55.803 18:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:25:55.803 18:33:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:55.803 18:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:55.803 18:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:55.803 18:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:55.803 18:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:55.803 18:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:56.061 18:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:56.061 18:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:56.061 18:33:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:56.062 18:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:25:56.062 18:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:56.062 18:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:56.062 18:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:56.062 18:33:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:56.320 18:33:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:56.320 { 00:25:56.320 "name": "nvme0n1", 00:25:56.320 "aliases": [ 00:25:56.320 "14f65c29-2596-4075-824b-acbdf5d65767" 00:25:56.320 ], 00:25:56.320 "product_name": "NVMe disk", 00:25:56.320 "block_size": 4096, 00:25:56.320 "num_blocks": 1310720, 00:25:56.320 "uuid": "14f65c29-2596-4075-824b-acbdf5d65767", 00:25:56.320 "assigned_rate_limits": { 00:25:56.320 "rw_ios_per_sec": 0, 00:25:56.320 "rw_mbytes_per_sec": 0, 00:25:56.320 "r_mbytes_per_sec": 0, 00:25:56.320 "w_mbytes_per_sec": 0 00:25:56.320 }, 00:25:56.320 "claimed": true, 00:25:56.320 "claim_type": "read_many_write_one", 00:25:56.320 "zoned": false, 00:25:56.320 "supported_io_types": { 00:25:56.320 "read": true, 00:25:56.320 "write": true, 00:25:56.320 "unmap": true, 00:25:56.320 "flush": true, 00:25:56.320 "reset": true, 00:25:56.320 "nvme_admin": true, 00:25:56.320 "nvme_io": true, 00:25:56.320 "nvme_io_md": false, 00:25:56.320 "write_zeroes": true, 00:25:56.320 "zcopy": false, 00:25:56.320 "get_zone_info": false, 00:25:56.320 "zone_management": false, 00:25:56.320 "zone_append": false, 00:25:56.320 "compare": true, 00:25:56.320 "compare_and_write": false, 00:25:56.320 "abort": true, 00:25:56.320 "seek_hole": false, 00:25:56.320 "seek_data": false, 00:25:56.320 "copy": true, 00:25:56.320 
"nvme_iov_md": false 00:25:56.320 }, 00:25:56.320 "driver_specific": { 00:25:56.320 "nvme": [ 00:25:56.320 { 00:25:56.320 "pci_address": "0000:00:11.0", 00:25:56.320 "trid": { 00:25:56.320 "trtype": "PCIe", 00:25:56.320 "traddr": "0000:00:11.0" 00:25:56.320 }, 00:25:56.320 "ctrlr_data": { 00:25:56.320 "cntlid": 0, 00:25:56.320 "vendor_id": "0x1b36", 00:25:56.320 "model_number": "QEMU NVMe Ctrl", 00:25:56.320 "serial_number": "12341", 00:25:56.320 "firmware_revision": "8.0.0", 00:25:56.320 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:56.320 "oacs": { 00:25:56.320 "security": 0, 00:25:56.320 "format": 1, 00:25:56.320 "firmware": 0, 00:25:56.320 "ns_manage": 1 00:25:56.320 }, 00:25:56.320 "multi_ctrlr": false, 00:25:56.320 "ana_reporting": false 00:25:56.320 }, 00:25:56.320 "vs": { 00:25:56.320 "nvme_version": "1.4" 00:25:56.320 }, 00:25:56.320 "ns_data": { 00:25:56.320 "id": 1, 00:25:56.320 "can_share": false 00:25:56.320 } 00:25:56.320 } 00:25:56.320 ], 00:25:56.320 "mp_policy": "active_passive" 00:25:56.320 } 00:25:56.320 } 00:25:56.320 ]' 00:25:56.320 18:33:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:56.320 18:33:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:56.320 18:33:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:56.320 18:33:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:56.320 18:33:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:56.320 18:33:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:25:56.320 18:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:56.320 18:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:56.320 18:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:56.320 18:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:56.320 18:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:56.578 18:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=78f0f95e-37bf-4cc2-85b6-7bf5a3524ec3 00:25:56.578 18:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:56.578 18:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 78f0f95e-37bf-4cc2-85b6-7bf5a3524ec3 00:25:57.146 18:33:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:57.146 18:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=7d231a1f-2b82-4578-983c-9fc88c35f314 00:25:57.146 18:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7d231a1f-2b82-4578-983c-9fc88c35f314 00:25:57.404 18:33:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 00:25:57.404 18:33:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:57.404 18:33:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 00:25:57.404 18:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:57.404 18:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:57.404 
18:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 00:25:57.404 18:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:57.404 18:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 00:25:57.404 18:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 00:25:57.404 18:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:57.404 18:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:57.404 18:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:57.404 18:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 00:25:57.663 18:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:57.663 { 00:25:57.663 "name": "02b3b71f-a0b8-4a42-9c63-96d8f634b5d1", 00:25:57.663 "aliases": [ 00:25:57.663 "lvs/nvme0n1p0" 00:25:57.663 ], 00:25:57.663 "product_name": "Logical Volume", 00:25:57.663 "block_size": 4096, 00:25:57.663 "num_blocks": 26476544, 00:25:57.663 "uuid": "02b3b71f-a0b8-4a42-9c63-96d8f634b5d1", 00:25:57.663 "assigned_rate_limits": { 00:25:57.663 "rw_ios_per_sec": 0, 00:25:57.663 "rw_mbytes_per_sec": 0, 00:25:57.663 "r_mbytes_per_sec": 0, 00:25:57.663 "w_mbytes_per_sec": 0 00:25:57.663 }, 00:25:57.663 "claimed": false, 00:25:57.663 "zoned": false, 00:25:57.663 "supported_io_types": { 00:25:57.663 "read": true, 00:25:57.663 "write": true, 00:25:57.663 "unmap": true, 00:25:57.663 "flush": false, 00:25:57.663 "reset": true, 00:25:57.663 "nvme_admin": false, 00:25:57.663 "nvme_io": false, 00:25:57.663 "nvme_io_md": false, 00:25:57.663 "write_zeroes": true, 00:25:57.663 "zcopy": false, 00:25:57.663 "get_zone_info": false, 00:25:57.663 "zone_management": false, 00:25:57.663 "zone_append": false, 00:25:57.663 "compare": false, 00:25:57.663 "compare_and_write": false, 00:25:57.663 "abort": false, 00:25:57.663 "seek_hole": true, 00:25:57.663 "seek_data": true, 00:25:57.663 "copy": false, 00:25:57.663 "nvme_iov_md": false 00:25:57.663 }, 00:25:57.663 "driver_specific": { 00:25:57.663 "lvol": { 00:25:57.663 "lvol_store_uuid": "7d231a1f-2b82-4578-983c-9fc88c35f314", 00:25:57.663 "base_bdev": "nvme0n1", 00:25:57.663 "thin_provision": true, 00:25:57.663 "num_allocated_clusters": 0, 00:25:57.663 "snapshot": false, 00:25:57.663 "clone": false, 00:25:57.663 "esnap_clone": false 00:25:57.663 } 00:25:57.663 } 00:25:57.663 } 00:25:57.663 ]' 00:25:57.663 18:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:57.922 18:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:57.922 18:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:57.922 18:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:57.922 18:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:57.922 18:33:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:57.922 18:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:57.922 18:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:57.922 18:33:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:58.181 18:33:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:58.182 18:33:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:58.182 18:33:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 00:25:58.182 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 00:25:58.182 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:58.182 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:58.182 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:58.182 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 00:25:58.441 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:58.441 { 00:25:58.441 "name": "02b3b71f-a0b8-4a42-9c63-96d8f634b5d1", 00:25:58.441 "aliases": [ 00:25:58.441 "lvs/nvme0n1p0" 00:25:58.441 ], 00:25:58.441 "product_name": "Logical Volume", 00:25:58.441 "block_size": 4096, 00:25:58.441 "num_blocks": 26476544, 00:25:58.441 "uuid": "02b3b71f-a0b8-4a42-9c63-96d8f634b5d1", 00:25:58.441 "assigned_rate_limits": { 00:25:58.441 "rw_ios_per_sec": 0, 00:25:58.441 "rw_mbytes_per_sec": 0, 00:25:58.441 "r_mbytes_per_sec": 0, 00:25:58.441 "w_mbytes_per_sec": 0 00:25:58.441 }, 00:25:58.441 "claimed": false, 00:25:58.441 "zoned": false, 00:25:58.441 "supported_io_types": { 00:25:58.441 "read": true, 00:25:58.441 "write": true, 00:25:58.441 "unmap": true, 00:25:58.441 "flush": false, 00:25:58.441 "reset": true, 00:25:58.441 "nvme_admin": false, 00:25:58.441 "nvme_io": false, 00:25:58.441 "nvme_io_md": false, 00:25:58.441 "write_zeroes": true, 00:25:58.441 "zcopy": false, 00:25:58.441 "get_zone_info": false, 00:25:58.441 "zone_management": false, 00:25:58.441 "zone_append": false, 00:25:58.441 "compare": false, 00:25:58.441 "compare_and_write": false, 00:25:58.441 "abort": false, 00:25:58.441 "seek_hole": true, 00:25:58.441 "seek_data": true, 00:25:58.441 "copy": false, 00:25:58.441 "nvme_iov_md": false 00:25:58.441 }, 00:25:58.441 "driver_specific": { 00:25:58.441 "lvol": { 00:25:58.441 "lvol_store_uuid": "7d231a1f-2b82-4578-983c-9fc88c35f314", 00:25:58.441 "base_bdev": "nvme0n1", 00:25:58.441 "thin_provision": true, 00:25:58.441 "num_allocated_clusters": 0, 00:25:58.441 "snapshot": false, 00:25:58.441 "clone": false, 00:25:58.441 "esnap_clone": false 00:25:58.441 } 00:25:58.441 } 00:25:58.441 } 00:25:58.441 ]' 00:25:58.441 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:58.441 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:58.441 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:58.441 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:58.441 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:58.441 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:58.441 18:33:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:58.441 18:33:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:58.701 18:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:58.701 18:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 00:25:58.701 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 00:25:58.701 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:58.701 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:58.701 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:58.701 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 00:25:58.960 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:58.960 { 00:25:58.960 "name": "02b3b71f-a0b8-4a42-9c63-96d8f634b5d1", 00:25:58.960 "aliases": [ 00:25:58.960 "lvs/nvme0n1p0" 00:25:58.960 ], 00:25:58.960 "product_name": "Logical Volume", 00:25:58.960 "block_size": 4096, 00:25:58.960 "num_blocks": 26476544, 00:25:58.960 "uuid": "02b3b71f-a0b8-4a42-9c63-96d8f634b5d1", 00:25:58.960 "assigned_rate_limits": { 00:25:58.960 "rw_ios_per_sec": 0, 00:25:58.960 "rw_mbytes_per_sec": 0, 00:25:58.960 "r_mbytes_per_sec": 0, 00:25:58.960 "w_mbytes_per_sec": 0 00:25:58.960 }, 00:25:58.960 "claimed": false, 00:25:58.960 "zoned": false, 00:25:58.960 "supported_io_types": { 00:25:58.960 "read": true, 00:25:58.960 "write": true, 00:25:58.960 "unmap": true, 00:25:58.960 "flush": false, 00:25:58.960 "reset": true, 00:25:58.960 "nvme_admin": false, 00:25:58.960 "nvme_io": false, 00:25:58.960 "nvme_io_md": false, 00:25:58.960 "write_zeroes": true, 00:25:58.960 "zcopy": false, 00:25:58.960 "get_zone_info": false, 00:25:58.960 "zone_management": false, 00:25:58.960 "zone_append": false, 00:25:58.960 "compare": false, 00:25:58.960 "compare_and_write": false, 00:25:58.960 "abort": false, 00:25:58.960 "seek_hole": true, 00:25:58.960 "seek_data": true, 00:25:58.960 "copy": false, 00:25:58.960 "nvme_iov_md": false 00:25:58.960 }, 00:25:58.960 "driver_specific": { 00:25:58.960 "lvol": { 00:25:58.960 "lvol_store_uuid": "7d231a1f-2b82-4578-983c-9fc88c35f314", 00:25:58.960 "base_bdev": "nvme0n1", 00:25:58.960 "thin_provision": true, 00:25:58.960 "num_allocated_clusters": 0, 00:25:58.960 "snapshot": false, 00:25:58.960 "clone": false, 00:25:58.960 "esnap_clone": false 00:25:58.960 } 00:25:58.960 } 00:25:58.960 } 00:25:58.960 ]' 00:25:58.960 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:58.960 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:58.960 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:58.960 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:58.960 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:58.960 18:33:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:58.960 18:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:58.960 18:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 
--l2p_dram_limit 10' 00:25:58.960 18:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:58.960 18:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:58.960 18:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:58.960 18:33:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 02b3b71f-a0b8-4a42-9c63-96d8f634b5d1 --l2p_dram_limit 10 -c nvc0n1p0 00:25:59.220 [2024-07-22 18:33:11.177856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.220 [2024-07-22 18:33:11.177926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:59.220 [2024-07-22 18:33:11.177949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:59.220 [2024-07-22 18:33:11.177964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.220 [2024-07-22 18:33:11.178045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.220 [2024-07-22 18:33:11.178067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:59.220 [2024-07-22 18:33:11.178081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:59.220 [2024-07-22 18:33:11.178095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.220 [2024-07-22 18:33:11.178126] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:59.220 [2024-07-22 18:33:11.179134] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:59.220 [2024-07-22 18:33:11.179165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.220 [2024-07-22 18:33:11.179185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:59.220 [2024-07-22 18:33:11.179199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:25:59.220 [2024-07-22 18:33:11.179214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.220 [2024-07-22 18:33:11.179383] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 868c9ea1-f154-4d00-9b60-61655c3bc5e0 00:25:59.220 [2024-07-22 18:33:11.181157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.220 [2024-07-22 18:33:11.181197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:59.220 [2024-07-22 18:33:11.181217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:59.220 [2024-07-22 18:33:11.181230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.220 [2024-07-22 18:33:11.191186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.220 [2024-07-22 18:33:11.191248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:59.220 [2024-07-22 18:33:11.191271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.886 ms 00:25:59.220 [2024-07-22 18:33:11.191284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.220 [2024-07-22 18:33:11.191445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.220 [2024-07-22 18:33:11.191467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:59.220 [2024-07-22 18:33:11.191484] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:59.220 [2024-07-22 18:33:11.191496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.220 [2024-07-22 18:33:11.191601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.220 [2024-07-22 18:33:11.191620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:59.220 [2024-07-22 18:33:11.191635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:59.220 [2024-07-22 18:33:11.191653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.220 [2024-07-22 18:33:11.191715] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:59.220 [2024-07-22 18:33:11.196902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.220 [2024-07-22 18:33:11.196969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:59.220 [2024-07-22 18:33:11.196987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.223 ms 00:25:59.220 [2024-07-22 18:33:11.197003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.220 [2024-07-22 18:33:11.197053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.220 [2024-07-22 18:33:11.197074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:59.220 [2024-07-22 18:33:11.197087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:59.220 [2024-07-22 18:33:11.197101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.220 [2024-07-22 18:33:11.197145] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:59.220 [2024-07-22 18:33:11.197315] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:59.220 [2024-07-22 18:33:11.197336] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:59.220 [2024-07-22 18:33:11.197358] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:59.220 [2024-07-22 18:33:11.197374] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:59.220 [2024-07-22 18:33:11.197390] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:59.220 [2024-07-22 18:33:11.197403] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:59.220 [2024-07-22 18:33:11.197417] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:59.220 [2024-07-22 18:33:11.197431] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:59.220 [2024-07-22 18:33:11.197447] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:59.220 [2024-07-22 18:33:11.197459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.220 [2024-07-22 18:33:11.197473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:59.220 [2024-07-22 18:33:11.197486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:25:59.220 [2024-07-22 18:33:11.197500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.220 [2024-07-22 18:33:11.197592] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.220 [2024-07-22 18:33:11.197612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:59.220 [2024-07-22 18:33:11.197625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:59.220 [2024-07-22 18:33:11.197639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.220 [2024-07-22 18:33:11.197785] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:59.221 [2024-07-22 18:33:11.197812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:59.221 [2024-07-22 18:33:11.197837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:59.221 [2024-07-22 18:33:11.197853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.221 [2024-07-22 18:33:11.197865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:59.221 [2024-07-22 18:33:11.197878] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:59.221 [2024-07-22 18:33:11.197889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:59.221 [2024-07-22 18:33:11.197903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:59.221 [2024-07-22 18:33:11.197913] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:59.221 [2024-07-22 18:33:11.197931] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:59.221 [2024-07-22 18:33:11.197941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:59.221 [2024-07-22 18:33:11.197954] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:59.221 [2024-07-22 18:33:11.197964] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:59.221 [2024-07-22 18:33:11.197979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:59.221 [2024-07-22 18:33:11.197990] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:59.221 [2024-07-22 18:33:11.198003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.221 [2024-07-22 18:33:11.198013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:59.221 [2024-07-22 18:33:11.198029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:59.221 [2024-07-22 18:33:11.198041] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.221 [2024-07-22 18:33:11.198054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:59.221 [2024-07-22 18:33:11.198065] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:59.221 [2024-07-22 18:33:11.198079] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.221 [2024-07-22 18:33:11.198089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:59.221 [2024-07-22 18:33:11.198103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:59.221 [2024-07-22 18:33:11.198113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.221 [2024-07-22 18:33:11.198126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:59.221 [2024-07-22 18:33:11.198137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:59.221 [2024-07-22 18:33:11.198150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.221 [2024-07-22 18:33:11.198160] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:59.221 [2024-07-22 18:33:11.198173] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:59.221 [2024-07-22 18:33:11.198184] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.221 [2024-07-22 18:33:11.198206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:59.221 [2024-07-22 18:33:11.198217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:59.221 [2024-07-22 18:33:11.198232] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:59.221 [2024-07-22 18:33:11.198243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:59.221 [2024-07-22 18:33:11.198256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:59.221 [2024-07-22 18:33:11.198267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:59.221 [2024-07-22 18:33:11.198280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:59.221 [2024-07-22 18:33:11.198291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:59.221 [2024-07-22 18:33:11.198305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.221 [2024-07-22 18:33:11.198316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:59.221 [2024-07-22 18:33:11.198329] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:59.221 [2024-07-22 18:33:11.198340] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.221 [2024-07-22 18:33:11.198352] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:59.221 [2024-07-22 18:33:11.198364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:59.221 [2024-07-22 18:33:11.198378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:59.221 [2024-07-22 18:33:11.198390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.221 [2024-07-22 18:33:11.198404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:59.221 [2024-07-22 18:33:11.198415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:59.221 [2024-07-22 18:33:11.198431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:59.221 [2024-07-22 18:33:11.198442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:59.221 [2024-07-22 18:33:11.198463] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:59.221 [2024-07-22 18:33:11.198475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:59.221 [2024-07-22 18:33:11.198504] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:59.221 [2024-07-22 18:33:11.198518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:59.221 [2024-07-22 18:33:11.198537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:59.221 [2024-07-22 18:33:11.198550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:59.221 [2024-07-22 18:33:11.198564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:59.221 [2024-07-22 18:33:11.198586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:59.221 [2024-07-22 18:33:11.198600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:59.221 [2024-07-22 18:33:11.198612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:59.221 [2024-07-22 18:33:11.198625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:59.221 [2024-07-22 18:33:11.198637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:59.221 [2024-07-22 18:33:11.198652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:59.221 [2024-07-22 18:33:11.198667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:59.221 [2024-07-22 18:33:11.198698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:59.221 [2024-07-22 18:33:11.198712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:59.221 [2024-07-22 18:33:11.198726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:59.221 [2024-07-22 18:33:11.198738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:59.221 [2024-07-22 18:33:11.198752] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:59.221 [2024-07-22 18:33:11.198765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:59.221 [2024-07-22 18:33:11.198780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:59.221 [2024-07-22 18:33:11.198792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:59.221 [2024-07-22 18:33:11.198806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:59.221 [2024-07-22 18:33:11.198818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:59.221 [2024-07-22 18:33:11.198832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.221 [2024-07-22 18:33:11.198844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:59.221 [2024-07-22 18:33:11.198859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:25:59.221 [2024-07-22 18:33:11.198874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.221 [2024-07-22 18:33:11.198936] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:59.221 [2024-07-22 18:33:11.198959] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:02.508 [2024-07-22 18:33:13.969828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:13.969946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:02.508 [2024-07-22 18:33:13.969983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2770.865 ms 00:26:02.508 [2024-07-22 18:33:13.970001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.023012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.023107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:02.508 [2024-07-22 18:33:14.023139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.555 ms 00:26:02.508 [2024-07-22 18:33:14.023156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.023438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.023465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:02.508 [2024-07-22 18:33:14.023487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:26:02.508 [2024-07-22 18:33:14.023508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.079929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.080014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:02.508 [2024-07-22 18:33:14.080046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.337 ms 00:26:02.508 [2024-07-22 18:33:14.080062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.080157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.080190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:02.508 [2024-07-22 18:33:14.080210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:02.508 [2024-07-22 18:33:14.080225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.081109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.081146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:02.508 [2024-07-22 18:33:14.081169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:26:02.508 [2024-07-22 18:33:14.081185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.081411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.081443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:02.508 [2024-07-22 18:33:14.081469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:26:02.508 [2024-07-22 18:33:14.081485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.108188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.108258] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:02.508 [2024-07-22 18:33:14.108284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.652 ms 00:26:02.508 [2024-07-22 18:33:14.108301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.126976] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:02.508 [2024-07-22 18:33:14.132608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.132658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:02.508 [2024-07-22 18:33:14.132702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.141 ms 00:26:02.508 [2024-07-22 18:33:14.132726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.212376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.212491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:02.508 [2024-07-22 18:33:14.212518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.586 ms 00:26:02.508 [2024-07-22 18:33:14.212535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.212831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.212864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:02.508 [2024-07-22 18:33:14.212879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:26:02.508 [2024-07-22 18:33:14.212898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.243375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.243435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:02.508 [2024-07-22 18:33:14.243456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.391 ms 00:26:02.508 [2024-07-22 18:33:14.243472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.273283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.273337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:02.508 [2024-07-22 18:33:14.273358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.754 ms 00:26:02.508 [2024-07-22 18:33:14.273373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.274319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.274361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:02.508 [2024-07-22 18:33:14.274377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:26:02.508 [2024-07-22 18:33:14.274397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.370447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.370537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:02.508 [2024-07-22 18:33:14.370560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.976 ms 00:26:02.508 [2024-07-22 18:33:14.370582] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.403472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.403546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:02.508 [2024-07-22 18:33:14.403569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.824 ms 00:26:02.508 [2024-07-22 18:33:14.403586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.434443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.434508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:02.508 [2024-07-22 18:33:14.434529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.790 ms 00:26:02.508 [2024-07-22 18:33:14.434545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.475932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.476016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:02.508 [2024-07-22 18:33:14.476044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.324 ms 00:26:02.508 [2024-07-22 18:33:14.476063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.508 [2024-07-22 18:33:14.476168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.508 [2024-07-22 18:33:14.476199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:02.508 [2024-07-22 18:33:14.476217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:02.508 [2024-07-22 18:33:14.476239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.509 [2024-07-22 18:33:14.476399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.509 [2024-07-22 18:33:14.476431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:02.509 [2024-07-22 18:33:14.476452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:02.509 [2024-07-22 18:33:14.476469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.509 [2024-07-22 18:33:14.477975] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3299.469 ms, result 0 00:26:02.509 { 00:26:02.509 "name": "ftl0", 00:26:02.509 "uuid": "868c9ea1-f154-4d00-9b60-61655c3bc5e0" 00:26:02.509 } 00:26:02.509 18:33:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:02.509 18:33:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:03.077 18:33:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:03.077 18:33:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:03.077 18:33:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:03.077 /dev/nbd0 00:26:03.077 18:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:03.077 18:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:03.077 18:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:26:03.077 18:33:15 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:03.077 18:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:03.077 18:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:03.077 18:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:26:03.077 18:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:03.077 18:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:03.077 18:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:03.077 1+0 records in 00:26:03.077 1+0 records out 00:26:03.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00258748 s, 1.6 MB/s 00:26:03.335 18:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:03.335 18:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:26:03.335 18:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:03.335 18:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:03.335 18:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:26:03.335 18:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:03.335 [2024-07-22 18:33:15.197452] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:26:03.335 [2024-07-22 18:33:15.197627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83722 ] 00:26:03.635 [2024-07-22 18:33:15.372834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.894 [2024-07-22 18:33:15.651315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.928  Copying: 156/1024 [MB] (156 MBps) Copying: 292/1024 [MB] (135 MBps) Copying: 456/1024 [MB] (164 MBps) Copying: 628/1024 [MB] (172 MBps) Copying: 795/1024 [MB] (166 MBps) Copying: 960/1024 [MB] (165 MBps) Copying: 1024/1024 [MB] (average 160 MBps) 00:26:11.928 00:26:11.928 18:33:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:14.462 18:33:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:14.462 [2024-07-22 18:33:25.999136] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
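The sequence above is the write-then-checksum half of the test: spdk_dd first fills testfile with 262144 blocks of 4096 bytes (262144 * 4096 B = 1024 MiB, matching the 1024/1024 [MB] progress counters) from /dev/urandom, md5sum records the reference checksum, and the same file is then streamed onto /dev/nbd0, the FTL bdev exported over NBD, with --oflag=direct so the writes bypass the page cache. A minimal standalone sketch of that step; the commands are as logged above, but the checksum file path is an assumption, not taken from the harness:

  # sketch only; testfile.md5 is an illustrative path for keeping the reference checksum
  spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144   # 262144 * 4096 B = 1024 MiB of random data
  md5sum testfile > testfile.md5                                            # reference checksum for later comparison
  spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct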
00:26:14.462 [2024-07-22 18:33:25.999381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83832 ] 00:26:14.462 [2024-07-22 18:33:26.172582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.462 [2024-07-22 18:33:26.428116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.321  Copying: 16/1024 [MB] (16 MBps) Copying: 32/1024 [MB] (16 MBps) Copying: 47/1024 [MB] (15 MBps) Copying: 64/1024 [MB] (16 MBps) Copying: 79/1024 [MB] (15 MBps) Copying: 95/1024 [MB] (15 MBps) Copying: 111/1024 [MB] (15 MBps) Copying: 127/1024 [MB] (15 MBps) Copying: 142/1024 [MB] (15 MBps) Copying: 158/1024 [MB] (15 MBps) Copying: 172/1024 [MB] (14 MBps) Copying: 186/1024 [MB] (14 MBps) Copying: 201/1024 [MB] (14 MBps) Copying: 217/1024 [MB] (15 MBps) Copying: 233/1024 [MB] (16 MBps) Copying: 248/1024 [MB] (15 MBps) Copying: 263/1024 [MB] (14 MBps) Copying: 279/1024 [MB] (15 MBps) Copying: 294/1024 [MB] (15 MBps) Copying: 310/1024 [MB] (15 MBps) Copying: 326/1024 [MB] (15 MBps) Copying: 342/1024 [MB] (15 MBps) Copying: 357/1024 [MB] (15 MBps) Copying: 373/1024 [MB] (15 MBps) Copying: 388/1024 [MB] (15 MBps) Copying: 404/1024 [MB] (15 MBps) Copying: 419/1024 [MB] (15 MBps) Copying: 436/1024 [MB] (16 MBps) Copying: 452/1024 [MB] (16 MBps) Copying: 468/1024 [MB] (16 MBps) Copying: 483/1024 [MB] (15 MBps) Copying: 498/1024 [MB] (15 MBps) Copying: 514/1024 [MB] (15 MBps) Copying: 530/1024 [MB] (15 MBps) Copying: 545/1024 [MB] (15 MBps) Copying: 561/1024 [MB] (15 MBps) Copying: 576/1024 [MB] (15 MBps) Copying: 591/1024 [MB] (15 MBps) Copying: 606/1024 [MB] (15 MBps) Copying: 620/1024 [MB] (14 MBps) Copying: 635/1024 [MB] (14 MBps) Copying: 650/1024 [MB] (14 MBps) Copying: 665/1024 [MB] (14 MBps) Copying: 679/1024 [MB] (14 MBps) Copying: 694/1024 [MB] (15 MBps) Copying: 708/1024 [MB] (14 MBps) Copying: 724/1024 [MB] (15 MBps) Copying: 739/1024 [MB] (15 MBps) Copying: 755/1024 [MB] (16 MBps) Copying: 771/1024 [MB] (15 MBps) Copying: 787/1024 [MB] (15 MBps) Copying: 803/1024 [MB] (15 MBps) Copying: 818/1024 [MB] (15 MBps) Copying: 834/1024 [MB] (15 MBps) Copying: 849/1024 [MB] (15 MBps) Copying: 865/1024 [MB] (15 MBps) Copying: 880/1024 [MB] (15 MBps) Copying: 896/1024 [MB] (15 MBps) Copying: 911/1024 [MB] (15 MBps) Copying: 927/1024 [MB] (15 MBps) Copying: 943/1024 [MB] (15 MBps) Copying: 959/1024 [MB] (15 MBps) Copying: 974/1024 [MB] (15 MBps) Copying: 989/1024 [MB] (15 MBps) Copying: 1005/1024 [MB] (15 MBps) Copying: 1021/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 15 MBps) 00:27:22.321 00:27:22.321 18:34:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:22.321 18:34:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:22.582 18:34:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:22.842 [2024-07-22 18:34:34.641547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.842 [2024-07-22 18:34:34.641640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:22.842 [2024-07-22 18:34:34.641678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:22.842 [2024-07-22 18:34:34.641713] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.842 [2024-07-22 18:34:34.641802] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:22.842 [2024-07-22 18:34:34.645556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.842 [2024-07-22 18:34:34.645598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:22.842 [2024-07-22 18:34:34.645615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.727 ms 00:27:22.842 [2024-07-22 18:34:34.645632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.842 [2024-07-22 18:34:34.647553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.842 [2024-07-22 18:34:34.647608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:22.842 [2024-07-22 18:34:34.647626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.887 ms 00:27:22.842 [2024-07-22 18:34:34.647641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.842 [2024-07-22 18:34:34.665431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.842 [2024-07-22 18:34:34.665486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:22.842 [2024-07-22 18:34:34.665507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.760 ms 00:27:22.842 [2024-07-22 18:34:34.665521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.842 [2024-07-22 18:34:34.672148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.843 [2024-07-22 18:34:34.672220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:22.843 [2024-07-22 18:34:34.672236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.580 ms 00:27:22.843 [2024-07-22 18:34:34.672250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.843 [2024-07-22 18:34:34.703783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.843 [2024-07-22 18:34:34.703845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:22.843 [2024-07-22 18:34:34.703863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.438 ms 00:27:22.843 [2024-07-22 18:34:34.703878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.843 [2024-07-22 18:34:34.722877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.843 [2024-07-22 18:34:34.722948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:22.843 [2024-07-22 18:34:34.722973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.937 ms 00:27:22.843 [2024-07-22 18:34:34.722996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.843 [2024-07-22 18:34:34.723220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.843 [2024-07-22 18:34:34.723250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:22.843 [2024-07-22 18:34:34.723266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:27:22.843 [2024-07-22 18:34:34.723281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.843 [2024-07-22 18:34:34.754394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.843 [2024-07-22 18:34:34.754455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: persist band info metadata 00:27:22.843 [2024-07-22 18:34:34.754477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.086 ms 00:27:22.843 [2024-07-22 18:34:34.754492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.843 [2024-07-22 18:34:34.784940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.843 [2024-07-22 18:34:34.784996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:22.843 [2024-07-22 18:34:34.785016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.384 ms 00:27:22.843 [2024-07-22 18:34:34.785030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.843 [2024-07-22 18:34:34.814938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.843 [2024-07-22 18:34:34.815026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:22.843 [2024-07-22 18:34:34.815047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.854 ms 00:27:22.843 [2024-07-22 18:34:34.815061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.843 [2024-07-22 18:34:34.844846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.843 [2024-07-22 18:34:34.844909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:22.843 [2024-07-22 18:34:34.844930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.652 ms 00:27:22.843 [2024-07-22 18:34:34.844945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.843 [2024-07-22 18:34:34.845007] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:22.843 [2024-07-22 18:34:34.845040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 
261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:22.843 [2024-07-22 18:34:34.845976] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.845991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 
18:34:34.846319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:22.844 [2024-07-22 18:34:34.846511] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:22.844 [2024-07-22 18:34:34.846524] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 868c9ea1-f154-4d00-9b60-61655c3bc5e0 00:27:22.844 [2024-07-22 18:34:34.846539] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:22.844 [2024-07-22 18:34:34.846550] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:22.844 [2024-07-22 18:34:34.846574] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:22.844 [2024-07-22 18:34:34.846586] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:22.844 [2024-07-22 18:34:34.846599] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:22.844 [2024-07-22 18:34:34.846611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:22.844 [2024-07-22 18:34:34.846624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:22.844 [2024-07-22 18:34:34.846635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:22.844 [2024-07-22 18:34:34.846648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:22.844 [2024-07-22 18:34:34.846659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.844 [2024-07-22 18:34:34.846691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:22.844 [2024-07-22 18:34:34.846706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.654 ms 00:27:22.844 [2024-07-22 18:34:34.846721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
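In the statistics dump above, WAF (write amplification factor) is total writes divided by user writes; with total writes 960 and user writes 0 the ratio is undefined, which the debug dump prints as inf. A one-line re-computation of that value (illustrative, not a harness command):

  awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'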
00:27:23.103 [2024-07-22 18:34:34.865303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.103 [2024-07-22 18:34:34.865378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:23.103 [2024-07-22 18:34:34.865405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.495 ms 00:27:23.103 [2024-07-22 18:34:34.865420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.103 [2024-07-22 18:34:34.865973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.103 [2024-07-22 18:34:34.866027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:23.103 [2024-07-22 18:34:34.866043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:27:23.103 [2024-07-22 18:34:34.866058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.103 [2024-07-22 18:34:34.919766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.103 [2024-07-22 18:34:34.919849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:23.103 [2024-07-22 18:34:34.919868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.103 [2024-07-22 18:34:34.919884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.103 [2024-07-22 18:34:34.919982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.103 [2024-07-22 18:34:34.920003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:23.103 [2024-07-22 18:34:34.920016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.103 [2024-07-22 18:34:34.920030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.103 [2024-07-22 18:34:34.920156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.103 [2024-07-22 18:34:34.920185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:23.103 [2024-07-22 18:34:34.920206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.103 [2024-07-22 18:34:34.920220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.103 [2024-07-22 18:34:34.920248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.104 [2024-07-22 18:34:34.920269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:23.104 [2024-07-22 18:34:34.920282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.104 [2024-07-22 18:34:34.920296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.104 [2024-07-22 18:34:35.028455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.104 [2024-07-22 18:34:35.028530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:23.104 [2024-07-22 18:34:35.028550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.104 [2024-07-22 18:34:35.028566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.104 [2024-07-22 18:34:35.114896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.104 [2024-07-22 18:34:35.114975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:23.104 [2024-07-22 18:34:35.114996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.104 [2024-07-22 
18:34:35.115011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.104 [2024-07-22 18:34:35.115133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.104 [2024-07-22 18:34:35.115157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:23.104 [2024-07-22 18:34:35.115176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.104 [2024-07-22 18:34:35.115190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.104 [2024-07-22 18:34:35.115267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.104 [2024-07-22 18:34:35.115306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:23.104 [2024-07-22 18:34:35.115321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.104 [2024-07-22 18:34:35.115335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.104 [2024-07-22 18:34:35.115500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.104 [2024-07-22 18:34:35.115534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:23.104 [2024-07-22 18:34:35.115549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.104 [2024-07-22 18:34:35.115566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.104 [2024-07-22 18:34:35.115624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.104 [2024-07-22 18:34:35.115664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:23.104 [2024-07-22 18:34:35.115697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.104 [2024-07-22 18:34:35.115716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.104 [2024-07-22 18:34:35.115774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.104 [2024-07-22 18:34:35.115801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:23.104 [2024-07-22 18:34:35.115823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.104 [2024-07-22 18:34:35.115848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.104 [2024-07-22 18:34:35.115915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.104 [2024-07-22 18:34:35.115953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:23.104 [2024-07-22 18:34:35.115967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.104 [2024-07-22 18:34:35.115981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.104 [2024-07-22 18:34:35.116162] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 474.573 ms, result 0 00:27:23.363 true 00:27:23.363 18:34:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 83580 00:27:23.363 18:34:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid83580 00:27:23.363 18:34:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:23.363 [2024-07-22 18:34:35.228099] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
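The kill -9 above terminates spdk_tgt (pid 83580) with SIGKILL, so no graceful teardown of any kind runs in the target, and the stale trace file under /dev/shm is removed by hand; from here on the test drives the device through standalone spdk_dd applications instead of the target. The run starting above fills testfile2 with another 1024 MiB of random data, and the run that follows (dirty_shutdown.sh@88, below) rebuilds the bdev stack from the ftl.json saved earlier via save_subsystem_config and writes that file into ftl0 at a 262144-block offset (--seek), past the region written over NBD earlier. A sketch of those steps, with the commands as logged; the pid variable is illustrative:

  kill -9 "$spdk_tgt_pid"                                   # SIGKILL: no graceful teardown runs
  rm -f "/dev/shm/spdk_tgt_trace.pid${spdk_tgt_pid}"        # drop the stale trace file by hand
  spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
  spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json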
00:27:23.363 [2024-07-22 18:34:35.228312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84517 ] 00:27:23.622 [2024-07-22 18:34:35.394077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.622 [2024-07-22 18:34:35.635425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.574  Copying: 167/1024 [MB] (167 MBps) Copying: 335/1024 [MB] (167 MBps) Copying: 501/1024 [MB] (166 MBps) Copying: 668/1024 [MB] (167 MBps) Copying: 837/1024 [MB] (168 MBps) Copying: 1005/1024 [MB] (167 MBps) Copying: 1024/1024 [MB] (average 167 MBps) 00:27:31.574 00:27:31.574 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 83580 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:31.574 18:34:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:31.574 [2024-07-22 18:34:43.363304] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:27:31.574 [2024-07-22 18:34:43.363494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84603 ] 00:27:31.574 [2024-07-22 18:34:43.527676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.832 [2024-07-22 18:34:43.765497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.397 [2024-07-22 18:34:44.130021] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:32.397 [2024-07-22 18:34:44.130104] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:32.397 [2024-07-22 18:34:44.197875] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:32.397 [2024-07-22 18:34:44.198279] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:32.397 [2024-07-22 18:34:44.198551] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:32.656 [2024-07-22 18:34:44.465806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.656 [2024-07-22 18:34:44.465892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:32.656 [2024-07-22 18:34:44.465912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:32.656 [2024-07-22 18:34:44.465928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.656 [2024-07-22 18:34:44.466034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.656 [2024-07-22 18:34:44.466056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:32.656 [2024-07-22 18:34:44.466069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:27:32.656 [2024-07-22 18:34:44.466083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.656 [2024-07-22 18:34:44.466122] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:32.656 [2024-07-22 18:34:44.467162] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:32.656 [2024-07-22 18:34:44.467210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.656 [2024-07-22 18:34:44.467224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:32.656 [2024-07-22 18:34:44.467236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:27:32.656 [2024-07-22 18:34:44.467246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.656 [2024-07-22 18:34:44.469338] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:32.656 [2024-07-22 18:34:44.486594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.656 [2024-07-22 18:34:44.486688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:32.656 [2024-07-22 18:34:44.486708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.262 ms 00:27:32.656 [2024-07-22 18:34:44.486727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.656 [2024-07-22 18:34:44.486808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.656 [2024-07-22 18:34:44.486826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:32.656 [2024-07-22 18:34:44.486839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:32.656 [2024-07-22 18:34:44.486849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.656 [2024-07-22 18:34:44.496353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.656 [2024-07-22 18:34:44.496484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:32.656 [2024-07-22 18:34:44.496511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.357 ms 00:27:32.656 [2024-07-22 18:34:44.496523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.656 [2024-07-22 18:34:44.496661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.656 [2024-07-22 18:34:44.496680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:32.656 [2024-07-22 18:34:44.496722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:27:32.656 [2024-07-22 18:34:44.496737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.656 [2024-07-22 18:34:44.496836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.656 [2024-07-22 18:34:44.496854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:32.656 [2024-07-22 18:34:44.496867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:32.656 [2024-07-22 18:34:44.496883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.656 [2024-07-22 18:34:44.496946] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:32.656 [2024-07-22 18:34:44.502133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.656 [2024-07-22 18:34:44.502189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:32.656 [2024-07-22 18:34:44.502205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.196 ms 00:27:32.656 [2024-07-22 18:34:44.502217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.657 [2024-07-22 
18:34:44.502306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.657 [2024-07-22 18:34:44.502330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:32.657 [2024-07-22 18:34:44.502343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:32.657 [2024-07-22 18:34:44.502353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.657 [2024-07-22 18:34:44.502416] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:32.657 [2024-07-22 18:34:44.502465] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:32.657 [2024-07-22 18:34:44.502520] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:32.657 [2024-07-22 18:34:44.502549] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:32.657 [2024-07-22 18:34:44.502674] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:32.657 [2024-07-22 18:34:44.502724] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:32.657 [2024-07-22 18:34:44.502741] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:32.657 [2024-07-22 18:34:44.502756] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:32.657 [2024-07-22 18:34:44.502771] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:32.657 [2024-07-22 18:34:44.502784] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:32.657 [2024-07-22 18:34:44.502808] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:32.657 [2024-07-22 18:34:44.502818] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:32.657 [2024-07-22 18:34:44.502829] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:32.657 [2024-07-22 18:34:44.502841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.657 [2024-07-22 18:34:44.502853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:32.657 [2024-07-22 18:34:44.502865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:27:32.657 [2024-07-22 18:34:44.502877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.657 [2024-07-22 18:34:44.502974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.657 [2024-07-22 18:34:44.502990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:32.657 [2024-07-22 18:34:44.503003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:32.657 [2024-07-22 18:34:44.503013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.657 [2024-07-22 18:34:44.503124] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:32.657 [2024-07-22 18:34:44.503151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:32.657 [2024-07-22 18:34:44.503164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:32.657 [2024-07-22 18:34:44.503176] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:27:32.657 [2024-07-22 18:34:44.503187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:32.657 [2024-07-22 18:34:44.503198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:32.657 [2024-07-22 18:34:44.503208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:32.657 [2024-07-22 18:34:44.503219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:32.657 [2024-07-22 18:34:44.503229] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:32.657 [2024-07-22 18:34:44.503239] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:32.657 [2024-07-22 18:34:44.503250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:32.657 [2024-07-22 18:34:44.503260] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:32.657 [2024-07-22 18:34:44.503270] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:32.657 [2024-07-22 18:34:44.503282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:32.657 [2024-07-22 18:34:44.503292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:32.657 [2024-07-22 18:34:44.503303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.657 [2024-07-22 18:34:44.503328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:32.657 [2024-07-22 18:34:44.503338] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:32.657 [2024-07-22 18:34:44.503349] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.657 [2024-07-22 18:34:44.503360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:32.657 [2024-07-22 18:34:44.503371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:32.657 [2024-07-22 18:34:44.503382] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.657 [2024-07-22 18:34:44.503404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:32.657 [2024-07-22 18:34:44.503416] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:32.657 [2024-07-22 18:34:44.503426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.657 [2024-07-22 18:34:44.503437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:32.657 [2024-07-22 18:34:44.503447] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:32.657 [2024-07-22 18:34:44.503457] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.657 [2024-07-22 18:34:44.503468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:32.657 [2024-07-22 18:34:44.503478] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:32.657 [2024-07-22 18:34:44.503488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.657 [2024-07-22 18:34:44.503499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:32.657 [2024-07-22 18:34:44.503509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:32.657 [2024-07-22 18:34:44.503519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:32.657 [2024-07-22 18:34:44.503538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:32.657 [2024-07-22 18:34:44.503549] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:32.657 [2024-07-22 18:34:44.503559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:32.657 [2024-07-22 18:34:44.503570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:32.657 [2024-07-22 18:34:44.503580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:32.657 [2024-07-22 18:34:44.503591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.657 [2024-07-22 18:34:44.503602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:32.657 [2024-07-22 18:34:44.503612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:32.657 [2024-07-22 18:34:44.503623] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.657 [2024-07-22 18:34:44.503633] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:32.657 [2024-07-22 18:34:44.503644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:32.657 [2024-07-22 18:34:44.503655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:32.657 [2024-07-22 18:34:44.503666] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.657 [2024-07-22 18:34:44.503691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:32.657 [2024-07-22 18:34:44.503706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:32.657 [2024-07-22 18:34:44.503717] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:32.657 [2024-07-22 18:34:44.503728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:32.657 [2024-07-22 18:34:44.503738] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:32.657 [2024-07-22 18:34:44.503751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:32.657 [2024-07-22 18:34:44.503764] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:32.657 [2024-07-22 18:34:44.503784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.657 [2024-07-22 18:34:44.503797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:32.657 [2024-07-22 18:34:44.503809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:32.657 [2024-07-22 18:34:44.503821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:32.657 [2024-07-22 18:34:44.503832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:32.657 [2024-07-22 18:34:44.503843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:32.657 [2024-07-22 18:34:44.503854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:32.657 [2024-07-22 18:34:44.503865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:32.657 [2024-07-22 
18:34:44.503876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:32.657 [2024-07-22 18:34:44.503888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:32.657 [2024-07-22 18:34:44.503899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:32.657 [2024-07-22 18:34:44.503911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:32.657 [2024-07-22 18:34:44.503923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:32.657 [2024-07-22 18:34:44.503934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:32.658 [2024-07-22 18:34:44.503946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:32.658 [2024-07-22 18:34:44.503962] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:32.658 [2024-07-22 18:34:44.503978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.658 [2024-07-22 18:34:44.503991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:32.658 [2024-07-22 18:34:44.504003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:32.658 [2024-07-22 18:34:44.504014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:32.658 [2024-07-22 18:34:44.504026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:32.658 [2024-07-22 18:34:44.504037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.658 [2024-07-22 18:34:44.504049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:32.658 [2024-07-22 18:34:44.504076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:27:32.658 [2024-07-22 18:34:44.504087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.658 [2024-07-22 18:34:44.551023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.658 [2024-07-22 18:34:44.551103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:32.658 [2024-07-22 18:34:44.551124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.867 ms 00:27:32.658 [2024-07-22 18:34:44.551136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.658 [2024-07-22 18:34:44.551277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.658 [2024-07-22 18:34:44.551294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:32.658 [2024-07-22 18:34:44.551306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:27:32.658 [2024-07-22 18:34:44.551334] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.658 [2024-07-22 18:34:44.592431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.658 [2024-07-22 18:34:44.592508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:32.658 [2024-07-22 18:34:44.592527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.971 ms 00:27:32.658 [2024-07-22 18:34:44.592538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.658 [2024-07-22 18:34:44.592634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.658 [2024-07-22 18:34:44.592678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:32.658 [2024-07-22 18:34:44.592715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:32.658 [2024-07-22 18:34:44.592728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.658 [2024-07-22 18:34:44.593392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.658 [2024-07-22 18:34:44.593440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:32.658 [2024-07-22 18:34:44.593455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:27:32.658 [2024-07-22 18:34:44.593480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.658 [2024-07-22 18:34:44.593680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.658 [2024-07-22 18:34:44.593716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:32.658 [2024-07-22 18:34:44.593755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:27:32.658 [2024-07-22 18:34:44.593768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.658 [2024-07-22 18:34:44.611159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.658 [2024-07-22 18:34:44.611224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:32.658 [2024-07-22 18:34:44.611242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.361 ms 00:27:32.658 [2024-07-22 18:34:44.611253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.658 [2024-07-22 18:34:44.627612] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:32.658 [2024-07-22 18:34:44.627706] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:32.658 [2024-07-22 18:34:44.627755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.658 [2024-07-22 18:34:44.627768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:32.658 [2024-07-22 18:34:44.627783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.327 ms 00:27:32.658 [2024-07-22 18:34:44.627793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.658 [2024-07-22 18:34:44.656962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.658 [2024-07-22 18:34:44.657046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:32.658 [2024-07-22 18:34:44.657065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.064 ms 00:27:32.658 [2024-07-22 18:34:44.657076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.916 [2024-07-22 
18:34:44.672600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.916 [2024-07-22 18:34:44.672678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:32.916 [2024-07-22 18:34:44.672705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.465 ms 00:27:32.916 [2024-07-22 18:34:44.672719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.916 [2024-07-22 18:34:44.688856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.916 [2024-07-22 18:34:44.688913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:32.916 [2024-07-22 18:34:44.688932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.084 ms 00:27:32.916 [2024-07-22 18:34:44.688944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.916 [2024-07-22 18:34:44.689975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.916 [2024-07-22 18:34:44.690036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:32.916 [2024-07-22 18:34:44.690053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:27:32.916 [2024-07-22 18:34:44.690065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.916 [2024-07-22 18:34:44.769879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.916 [2024-07-22 18:34:44.769960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:32.916 [2024-07-22 18:34:44.769981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.783 ms 00:27:32.916 [2024-07-22 18:34:44.769995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.916 [2024-07-22 18:34:44.783471] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:32.916 [2024-07-22 18:34:44.787706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.916 [2024-07-22 18:34:44.787769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:32.916 [2024-07-22 18:34:44.787786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.629 ms 00:27:32.916 [2024-07-22 18:34:44.787796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.916 [2024-07-22 18:34:44.787958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.916 [2024-07-22 18:34:44.787979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:32.917 [2024-07-22 18:34:44.787997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:32.917 [2024-07-22 18:34:44.788009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.917 [2024-07-22 18:34:44.788112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.917 [2024-07-22 18:34:44.788140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:32.917 [2024-07-22 18:34:44.788155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:32.917 [2024-07-22 18:34:44.788166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.917 [2024-07-22 18:34:44.788200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.917 [2024-07-22 18:34:44.788215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:32.917 [2024-07-22 18:34:44.788226] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:32.917 [2024-07-22 18:34:44.788244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.917 [2024-07-22 18:34:44.788282] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:32.917 [2024-07-22 18:34:44.788299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.917 [2024-07-22 18:34:44.788310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:32.917 [2024-07-22 18:34:44.788322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:32.917 [2024-07-22 18:34:44.788333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.917 [2024-07-22 18:34:44.821291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.917 [2024-07-22 18:34:44.821369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:32.917 [2024-07-22 18:34:44.821402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.926 ms 00:27:32.917 [2024-07-22 18:34:44.821414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.917 [2024-07-22 18:34:44.821568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.917 [2024-07-22 18:34:44.821600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:32.917 [2024-07-22 18:34:44.821613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:32.917 [2024-07-22 18:34:44.821638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.917 [2024-07-22 18:34:44.823171] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 356.805 ms, result 0 00:28:13.497  Copying: 24/1024 [MB] (24 MBps) Copying: 49/1024 [MB] (24 MBps) Copying: 75/1024 [MB] (25 MBps) Copying: 99/1024 [MB] (24 MBps) Copying: 125/1024 [MB] (25 MBps) Copying: 150/1024 [MB] (25 MBps) Copying: 176/1024 [MB] (25 MBps) Copying: 201/1024 [MB] (25 MBps) Copying: 225/1024 [MB] (23 MBps) Copying: 250/1024 [MB] (24 MBps) Copying: 274/1024 [MB] (24 MBps) Copying: 300/1024 [MB] (25 MBps) Copying: 323/1024 [MB] (23 MBps) Copying: 350/1024 [MB] (27 MBps) Copying: 377/1024 [MB] (26 MBps) Copying: 404/1024 [MB] (27 MBps) Copying: 431/1024 [MB] (27 MBps) Copying: 458/1024 [MB] (26 MBps) Copying: 485/1024 [MB] (27 MBps) Copying: 511/1024 [MB] (25 MBps) Copying: 537/1024 [MB] (26 MBps) Copying: 564/1024 [MB] (26 MBps) Copying: 590/1024 [MB] (26 MBps) Copying: 616/1024 [MB] (25 MBps) Copying: 643/1024 [MB] (27 MBps) Copying: 670/1024 [MB] (26 MBps) Copying: 696/1024 [MB] (26 MBps) Copying: 722/1024 [MB] (25 MBps) Copying: 748/1024 [MB] (26 MBps) Copying: 774/1024 [MB] (26 MBps) Copying: 800/1024 [MB] (26 MBps) Copying: 826/1024 [MB] (26 MBps) Copying: 854/1024 [MB] (27 MBps) Copying: 880/1024 [MB] (26 MBps) Copying: 907/1024 [MB] (26 MBps) Copying: 932/1024 [MB] (25 MBps) Copying: 958/1024 [MB] (26 MBps) Copying: 984/1024 [MB] (26 MBps) Copying: 1011/1024 [MB] (26 MBps) Copying: 1023/1024 [MB] (12 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-22 18:35:25.510187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.497 [2024-07-22 18:35:25.510409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:13.497 [2024-07-22 18:35:25.510538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 
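Two quick cross-checks that the figures in this trace are self-consistent — a hedged sketch assuming GNU bc is available, with every number taken verbatim from the copy progress above and the statistics dump further below:

  # copy phase: 1024 MB at the reported average of 25 MBps
  echo "scale=1; 1024/25" | bc          # 40.9 -> ~41 s; matches the trace, which runs from
                                        # 'FTL startup' done at 18:34:44.823 to the first
                                        # deinit step at 18:35:25.510, ~40.7 s of copying
  # write amplification: total writes vs. user writes from the stats dump below
  echo "scale=6; 131264/130304" | bc    # 1.007367 -> rounds to the reported WAF of 1.0074
  echo $(( 131264 - 130304 ))           # 960 extra writes, i.e. FTL's own metadata on top
                                        # of the 130304 user writes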
00:28:13.497 [2024-07-22 18:35:25.510590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.755 [2024-07-22 18:35:25.512906] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:13.755 [2024-07-22 18:35:25.520232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.755 [2024-07-22 18:35:25.520395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:13.755 [2024-07-22 18:35:25.520521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.077 ms 00:28:13.755 [2024-07-22 18:35:25.520565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.755 [2024-07-22 18:35:25.532963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.755 [2024-07-22 18:35:25.533126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:13.755 [2024-07-22 18:35:25.533267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.173 ms 00:28:13.755 [2024-07-22 18:35:25.533317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.755 [2024-07-22 18:35:25.557875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.755 [2024-07-22 18:35:25.558051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:13.755 [2024-07-22 18:35:25.558173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.430 ms 00:28:13.755 [2024-07-22 18:35:25.558224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.755 [2024-07-22 18:35:25.564971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.755 [2024-07-22 18:35:25.565135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:13.755 [2024-07-22 18:35:25.565250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.542 ms 00:28:13.755 [2024-07-22 18:35:25.565308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.755 [2024-07-22 18:35:25.596935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.755 [2024-07-22 18:35:25.597109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:13.755 [2024-07-22 18:35:25.597229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.521 ms 00:28:13.755 [2024-07-22 18:35:25.597251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.755 [2024-07-22 18:35:25.615008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.755 [2024-07-22 18:35:25.615064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:13.755 [2024-07-22 18:35:25.615104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.707 ms 00:28:13.755 [2024-07-22 18:35:25.615125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.755 [2024-07-22 18:35:25.720868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.755 [2024-07-22 18:35:25.720950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:13.755 [2024-07-22 18:35:25.720995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.650 ms 00:28:13.755 [2024-07-22 18:35:25.721017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.755 [2024-07-22 18:35:25.754488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.755 [2024-07-22 
18:35:25.754571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:13.755 [2024-07-22 18:35:25.754604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.413 ms 00:28:13.755 [2024-07-22 18:35:25.754625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.015 [2024-07-22 18:35:25.786929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.015 [2024-07-22 18:35:25.786996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:14.015 [2024-07-22 18:35:25.787026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.152 ms 00:28:14.015 [2024-07-22 18:35:25.787046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.015 [2024-07-22 18:35:25.817518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.015 [2024-07-22 18:35:25.817568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:14.015 [2024-07-22 18:35:25.817596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.376 ms 00:28:14.015 [2024-07-22 18:35:25.817617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.015 [2024-07-22 18:35:25.848214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.015 [2024-07-22 18:35:25.848272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:14.015 [2024-07-22 18:35:25.848299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.416 ms 00:28:14.015 [2024-07-22 18:35:25.848320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.015 [2024-07-22 18:35:25.848415] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:14.015 [2024-07-22 18:35:25.848454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130304 / 261120 wr_cnt: 1 state: open 00:28:14.015 [2024-07-22 18:35:25.848479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848763] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.848991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 
[2024-07-22 18:35:25.849312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:28:14.015 [2024-07-22 18:35:25.849863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.849993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.850016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.850036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:14.015 [2024-07-22 18:35:25.850057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:14.016 [2024-07-22 18:35:25.850675] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:14.016 [2024-07-22 18:35:25.850717] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 868c9ea1-f154-4d00-9b60-61655c3bc5e0 00:28:14.016 [2024-07-22 18:35:25.850739] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130304 00:28:14.016 [2024-07-22 18:35:25.850760] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131264 00:28:14.016 [2024-07-22 18:35:25.850798] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130304 00:28:14.016 [2024-07-22 18:35:25.850828] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:28:14.016 [2024-07-22 18:35:25.850848] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:14.016 [2024-07-22 18:35:25.850869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:14.016 [2024-07-22 18:35:25.850887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:14.016 [2024-07-22 18:35:25.850900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:14.016 [2024-07-22 18:35:25.850912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:14.016 [2024-07-22 18:35:25.850932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.016 [2024-07-22 18:35:25.850961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:14.016 [2024-07-22 18:35:25.851004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.519 ms 00:28:14.016 [2024-07-22 18:35:25.851032] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.016 [2024-07-22 18:35:25.868133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.016 [2024-07-22 18:35:25.868223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:14.016 [2024-07-22 18:35:25.868252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.031 ms 00:28:14.016 [2024-07-22 18:35:25.868271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.016 [2024-07-22 18:35:25.868867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.016 [2024-07-22 18:35:25.868912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:14.016 [2024-07-22 18:35:25.868969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:28:14.016 [2024-07-22 18:35:25.868994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.016 [2024-07-22 18:35:25.908242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.016 [2024-07-22 18:35:25.908326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:14.016 [2024-07-22 18:35:25.908370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.016 [2024-07-22 18:35:25.908387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.016 [2024-07-22 18:35:25.908527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.016 [2024-07-22 18:35:25.908558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:14.016 [2024-07-22 18:35:25.908580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.016 [2024-07-22 18:35:25.908600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.016 [2024-07-22 18:35:25.908759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.016 [2024-07-22 18:35:25.908809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:14.016 [2024-07-22 18:35:25.908834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.016 [2024-07-22 18:35:25.908853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.016 [2024-07-22 18:35:25.908888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.016 [2024-07-22 18:35:25.908918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:14.016 [2024-07-22 18:35:25.908933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.016 [2024-07-22 18:35:25.908949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.016 [2024-07-22 18:35:26.015037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.016 [2024-07-22 18:35:26.015130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:14.016 [2024-07-22 18:35:26.015190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.016 [2024-07-22 18:35:26.015209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.275 [2024-07-22 18:35:26.101199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.275 [2024-07-22 18:35:26.101284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:14.275 [2024-07-22 18:35:26.101314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:28:14.275 [2024-07-22 18:35:26.101332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.275 [2024-07-22 18:35:26.101439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.275 [2024-07-22 18:35:26.101471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:14.275 [2024-07-22 18:35:26.101509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.275 [2024-07-22 18:35:26.101548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.275 [2024-07-22 18:35:26.101623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.275 [2024-07-22 18:35:26.101657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:14.275 [2024-07-22 18:35:26.101702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.275 [2024-07-22 18:35:26.101735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.275 [2024-07-22 18:35:26.101915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.275 [2024-07-22 18:35:26.101959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:14.275 [2024-07-22 18:35:26.101983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.275 [2024-07-22 18:35:26.102014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.275 [2024-07-22 18:35:26.102097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.275 [2024-07-22 18:35:26.102127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:14.275 [2024-07-22 18:35:26.102161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.275 [2024-07-22 18:35:26.102183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.275 [2024-07-22 18:35:26.102254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.275 [2024-07-22 18:35:26.102284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:14.275 [2024-07-22 18:35:26.102308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.275 [2024-07-22 18:35:26.102339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.275 [2024-07-22 18:35:26.102426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.275 [2024-07-22 18:35:26.102464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:14.275 [2024-07-22 18:35:26.102489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.275 [2024-07-22 18:35:26.102511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.275 [2024-07-22 18:35:26.102753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 593.468 ms, result 0 00:28:15.651 00:28:15.651 00:28:15.651 18:35:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:18.182 18:35:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:18.182 [2024-07-22 18:35:29.952400] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 
initialization... 00:28:18.182 [2024-07-22 18:35:29.952578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85084 ] 00:28:18.182 [2024-07-22 18:35:30.125135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.440 [2024-07-22 18:35:30.385037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.008 [2024-07-22 18:35:30.745970] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:19.008 [2024-07-22 18:35:30.746045] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:19.008 [2024-07-22 18:35:30.909138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.008 [2024-07-22 18:35:30.909200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:19.008 [2024-07-22 18:35:30.909220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:19.008 [2024-07-22 18:35:30.909233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.008 [2024-07-22 18:35:30.909306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.008 [2024-07-22 18:35:30.909326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:19.008 [2024-07-22 18:35:30.909340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:19.008 [2024-07-22 18:35:30.909356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.008 [2024-07-22 18:35:30.909387] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:19.008 [2024-07-22 18:35:30.910281] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:19.008 [2024-07-22 18:35:30.910316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.008 [2024-07-22 18:35:30.910335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:19.008 [2024-07-22 18:35:30.910348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:28:19.008 [2024-07-22 18:35:30.910360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.008 [2024-07-22 18:35:30.912227] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:19.008 [2024-07-22 18:35:30.928791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.008 [2024-07-22 18:35:30.928833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:19.008 [2024-07-22 18:35:30.928851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.565 ms 00:28:19.008 [2024-07-22 18:35:30.928863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.008 [2024-07-22 18:35:30.928940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.008 [2024-07-22 18:35:30.928959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:19.008 [2024-07-22 18:35:30.928977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:19.008 [2024-07-22 18:35:30.928989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.008 [2024-07-22 18:35:30.937388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:19.008 [2024-07-22 18:35:30.937437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:19.008 [2024-07-22 18:35:30.937454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.306 ms 00:28:19.008 [2024-07-22 18:35:30.937466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.008 [2024-07-22 18:35:30.937577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.008 [2024-07-22 18:35:30.937599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:19.008 [2024-07-22 18:35:30.937612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:19.008 [2024-07-22 18:35:30.937623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.008 [2024-07-22 18:35:30.937716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.008 [2024-07-22 18:35:30.937736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:19.008 [2024-07-22 18:35:30.937748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:19.008 [2024-07-22 18:35:30.937760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.008 [2024-07-22 18:35:30.937797] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:19.008 [2024-07-22 18:35:30.942794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.008 [2024-07-22 18:35:30.942831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:19.008 [2024-07-22 18:35:30.942846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.007 ms 00:28:19.008 [2024-07-22 18:35:30.942857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.008 [2024-07-22 18:35:30.942912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.008 [2024-07-22 18:35:30.942929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:19.008 [2024-07-22 18:35:30.942941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:19.008 [2024-07-22 18:35:30.942952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.008 [2024-07-22 18:35:30.943017] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:19.008 [2024-07-22 18:35:30.943050] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:19.008 [2024-07-22 18:35:30.943094] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:19.008 [2024-07-22 18:35:30.943119] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:19.008 [2024-07-22 18:35:30.943223] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:19.008 [2024-07-22 18:35:30.943248] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:19.008 [2024-07-22 18:35:30.943263] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:19.008 [2024-07-22 18:35:30.943278] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:19.008 [2024-07-22 18:35:30.943292] 
ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:19.008 [2024-07-22 18:35:30.943304] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:19.008 [2024-07-22 18:35:30.943315] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:19.008 [2024-07-22 18:35:30.943326] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:19.008 [2024-07-22 18:35:30.943337] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:19.008 [2024-07-22 18:35:30.943348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.008 [2024-07-22 18:35:30.943365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:19.008 [2024-07-22 18:35:30.943377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:28:19.008 [2024-07-22 18:35:30.943400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.008 [2024-07-22 18:35:30.943494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.008 [2024-07-22 18:35:30.943509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:19.008 [2024-07-22 18:35:30.943521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:28:19.008 [2024-07-22 18:35:30.943532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.008 [2024-07-22 18:35:30.943635] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:19.008 [2024-07-22 18:35:30.943657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:19.008 [2024-07-22 18:35:30.943676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.008 [2024-07-22 18:35:30.943703] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.008 [2024-07-22 18:35:30.943715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:19.008 [2024-07-22 18:35:30.943726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:19.008 [2024-07-22 18:35:30.943736] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:19.008 [2024-07-22 18:35:30.943746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:19.008 [2024-07-22 18:35:30.943757] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:19.008 [2024-07-22 18:35:30.943767] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.008 [2024-07-22 18:35:30.943777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:19.008 [2024-07-22 18:35:30.943787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:19.008 [2024-07-22 18:35:30.943800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.008 [2024-07-22 18:35:30.943811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:19.008 [2024-07-22 18:35:30.943822] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:19.009 [2024-07-22 18:35:30.943832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.009 [2024-07-22 18:35:30.943843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:19.009 [2024-07-22 18:35:30.943853] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:19.009 [2024-07-22 18:35:30.943864] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.009 [2024-07-22 18:35:30.943875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:19.009 [2024-07-22 18:35:30.943898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:19.009 [2024-07-22 18:35:30.943909] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.009 [2024-07-22 18:35:30.943920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:19.009 [2024-07-22 18:35:30.943931] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:19.009 [2024-07-22 18:35:30.943942] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.009 [2024-07-22 18:35:30.943952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:19.009 [2024-07-22 18:35:30.943963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:19.009 [2024-07-22 18:35:30.943973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.009 [2024-07-22 18:35:30.943983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:19.009 [2024-07-22 18:35:30.943994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:19.009 [2024-07-22 18:35:30.944004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.009 [2024-07-22 18:35:30.944015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:19.009 [2024-07-22 18:35:30.944025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:19.009 [2024-07-22 18:35:30.944035] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.009 [2024-07-22 18:35:30.944046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:19.009 [2024-07-22 18:35:30.944056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:19.009 [2024-07-22 18:35:30.944067] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.009 [2024-07-22 18:35:30.944078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:19.009 [2024-07-22 18:35:30.944088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:19.009 [2024-07-22 18:35:30.944098] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.009 [2024-07-22 18:35:30.944109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:19.009 [2024-07-22 18:35:30.944119] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:19.009 [2024-07-22 18:35:30.944130] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.009 [2024-07-22 18:35:30.944140] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:19.009 [2024-07-22 18:35:30.944153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:19.009 [2024-07-22 18:35:30.944165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.009 [2024-07-22 18:35:30.944176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.009 [2024-07-22 18:35:30.944188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:19.009 [2024-07-22 18:35:30.944199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:19.009 [2024-07-22 18:35:30.944209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:19.009 
[2024-07-22 18:35:30.944220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:19.009 [2024-07-22 18:35:30.944231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:19.009 [2024-07-22 18:35:30.944241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:19.009 [2024-07-22 18:35:30.944253] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:19.009 [2024-07-22 18:35:30.944267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.009 [2024-07-22 18:35:30.944280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:19.009 [2024-07-22 18:35:30.944292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:19.009 [2024-07-22 18:35:30.944304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:19.009 [2024-07-22 18:35:30.944315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:19.009 [2024-07-22 18:35:30.944327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:19.009 [2024-07-22 18:35:30.944338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:19.009 [2024-07-22 18:35:30.944350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:19.009 [2024-07-22 18:35:30.944361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:19.009 [2024-07-22 18:35:30.944373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:19.009 [2024-07-22 18:35:30.944388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:19.009 [2024-07-22 18:35:30.944408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:19.009 [2024-07-22 18:35:30.944427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:19.009 [2024-07-22 18:35:30.944443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:19.009 [2024-07-22 18:35:30.944455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:19.009 [2024-07-22 18:35:30.944466] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:19.009 [2024-07-22 18:35:30.944479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.009 [2024-07-22 18:35:30.944491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:19.009 [2024-07-22 18:35:30.944503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:19.009 [2024-07-22 18:35:30.944515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:19.009 [2024-07-22 18:35:30.944526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:19.009 [2024-07-22 18:35:30.944539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.009 [2024-07-22 18:35:30.944559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:19.009 [2024-07-22 18:35:30.944572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:28:19.009 [2024-07-22 18:35:30.944583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.009 [2024-07-22 18:35:30.992449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.009 [2024-07-22 18:35:30.992537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:19.009 [2024-07-22 18:35:30.992558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.796 ms 00:28:19.009 [2024-07-22 18:35:30.992570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.009 [2024-07-22 18:35:30.992704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.009 [2024-07-22 18:35:30.992722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:19.009 [2024-07-22 18:35:30.992735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:28:19.009 [2024-07-22 18:35:30.992746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.036165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.036229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:19.268 [2024-07-22 18:35:31.036249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.321 ms 00:28:19.268 [2024-07-22 18:35:31.036261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.036333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.036351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:19.268 [2024-07-22 18:35:31.036363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:19.268 [2024-07-22 18:35:31.036375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.037023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.037050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:19.268 [2024-07-22 18:35:31.037064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:28:19.268 [2024-07-22 18:35:31.037075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.037245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.037264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:19.268 [2024-07-22 18:35:31.037276] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:28:19.268 [2024-07-22 18:35:31.037288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.056338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.056404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:19.268 [2024-07-22 18:35:31.056424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.021 ms 00:28:19.268 [2024-07-22 18:35:31.056437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.074610] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:19.268 [2024-07-22 18:35:31.074693] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:19.268 [2024-07-22 18:35:31.074718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.074732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:19.268 [2024-07-22 18:35:31.074747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.093 ms 00:28:19.268 [2024-07-22 18:35:31.074758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.106913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.106976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:19.268 [2024-07-22 18:35:31.107005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.028 ms 00:28:19.268 [2024-07-22 18:35:31.107018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.123022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.123064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:19.268 [2024-07-22 18:35:31.123080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.918 ms 00:28:19.268 [2024-07-22 18:35:31.123092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.139047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.139087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:19.268 [2024-07-22 18:35:31.139103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.910 ms 00:28:19.268 [2024-07-22 18:35:31.139114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.140063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.140098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:19.268 [2024-07-22 18:35:31.140114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:28:19.268 [2024-07-22 18:35:31.140125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.217223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.217299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:19.268 [2024-07-22 18:35:31.217321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.070 ms 00:28:19.268 [2024-07-22 18:35:31.217333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.229923] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:19.268 [2024-07-22 18:35:31.233899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.233936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:19.268 [2024-07-22 18:35:31.233952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.483 ms 00:28:19.268 [2024-07-22 18:35:31.233964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.234079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.234099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:19.268 [2024-07-22 18:35:31.234112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:19.268 [2024-07-22 18:35:31.234123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.236113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.236153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:19.268 [2024-07-22 18:35:31.236167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.929 ms 00:28:19.268 [2024-07-22 18:35:31.236178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.236215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.236230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:19.268 [2024-07-22 18:35:31.236243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:19.268 [2024-07-22 18:35:31.236254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.236294] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:19.268 [2024-07-22 18:35:31.236310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.236321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:19.268 [2024-07-22 18:35:31.236338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:19.268 [2024-07-22 18:35:31.236349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.267406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.267461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:19.268 [2024-07-22 18:35:31.267484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.032 ms 00:28:19.268 [2024-07-22 18:35:31.267496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.268 [2024-07-22 18:35:31.267581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.268 [2024-07-22 18:35:31.267622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:19.268 [2024-07-22 18:35:31.267635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:19.268 [2024-07-22 18:35:31.267646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:19.268 [2024-07-22 18:35:31.274798] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 363.972 ms, result 0 00:28:56.377  Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-22 18:36:08.339198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.377 [2024-07-22 18:36:08.339488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:56.377 [2024-07-22 18:36:08.339526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:56.377 [2024-07-22 18:36:08.339554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.377 [2024-07-22 18:36:08.339601] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:56.377 [2024-07-22 18:36:08.345521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.377 [2024-07-22 18:36:08.345573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:56.377 [2024-07-22 18:36:08.345597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.890 ms 00:28:56.377 [2024-07-22 18:36:08.345617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.377 [2024-07-22 18:36:08.345991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.377 [2024-07-22 18:36:08.346019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:56.377 [2024-07-22 18:36:08.346041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:28:56.377 [2024-07-22 18:36:08.346060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.377 [2024-07-22 18:36:08.360196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.377 [2024-07-22 18:36:08.360271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:56.377 [2024-07-22 18:36:08.360294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.091 ms 00:28:56.377 [2024-07-22 18:36:08.360310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.377 [2024-07-22 18:36:08.368633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.377 [2024-07-22 18:36:08.368670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*:
[FTL][ftl0] name: Finish L2P trims 00:28:56.377 [2024-07-22 18:36:08.368698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.273 ms 00:28:56.377 [2024-07-22 18:36:08.368714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.637 [2024-07-22 18:36:08.407975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.637 [2024-07-22 18:36:08.408030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:56.637 [2024-07-22 18:36:08.408050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.189 ms 00:28:56.637 [2024-07-22 18:36:08.408065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.637 [2024-07-22 18:36:08.429413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.637 [2024-07-22 18:36:08.429488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:56.637 [2024-07-22 18:36:08.429511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.292 ms 00:28:56.637 [2024-07-22 18:36:08.429526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.637 [2024-07-22 18:36:08.433074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.637 [2024-07-22 18:36:08.433121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:56.637 [2024-07-22 18:36:08.433140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.487 ms 00:28:56.637 [2024-07-22 18:36:08.433156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.637 [2024-07-22 18:36:08.471335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.637 [2024-07-22 18:36:08.471382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:56.637 [2024-07-22 18:36:08.471410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.153 ms 00:28:56.637 [2024-07-22 18:36:08.471424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.637 [2024-07-22 18:36:08.510655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.637 [2024-07-22 18:36:08.510725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:56.637 [2024-07-22 18:36:08.510757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.176 ms 00:28:56.637 [2024-07-22 18:36:08.510771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.637 [2024-07-22 18:36:08.548043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.637 [2024-07-22 18:36:08.548094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:56.637 [2024-07-22 18:36:08.548114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.214 ms 00:28:56.637 [2024-07-22 18:36:08.548147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.637 [2024-07-22 18:36:08.586353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.637 [2024-07-22 18:36:08.586426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:56.637 [2024-07-22 18:36:08.586448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.092 ms 00:28:56.637 [2024-07-22 18:36:08.586462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.637 [2024-07-22 18:36:08.586542] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:28:56.637 [2024-07-22 18:36:08.586587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:56.637 [2024-07-22 18:36:08.586606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:28:56.637 [2024-07-22 18:36:08.586622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:56.637 [2024-07-22 18:36:08.586953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.586967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.586982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.586999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587751] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.587987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.588001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.588016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.588031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.588046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.588061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.588076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.588091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.588105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.588120] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:56.638 [2024-07-22 18:36:08.588147] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:56.638 [2024-07-22 18:36:08.588161] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 868c9ea1-f154-4d00-9b60-61655c3bc5e0 00:28:56.638 [2024-07-22 18:36:08.588176] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:28:56.638 [2024-07-22 18:36:08.588190] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136128 00:28:56.638 [2024-07-22 18:36:08.588204] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134144 00:28:56.638 [2024-07-22 18:36:08.588228] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:28:56.638 [2024-07-22 18:36:08.588242] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:56.638 [2024-07-22 18:36:08.588262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:56.638 [2024-07-22 18:36:08.588275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:56.638 [2024-07-22 18:36:08.588288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:56.638 [2024-07-22 18:36:08.588301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:56.638 [2024-07-22 18:36:08.588315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.638 [2024-07-22 18:36:08.588330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:56.638 [2024-07-22 18:36:08.588354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.774 ms 00:28:56.638 [2024-07-22 18:36:08.588378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.639 [2024-07-22 18:36:08.610112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.639 [2024-07-22 18:36:08.610161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:56.639 [2024-07-22 18:36:08.610183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.668 ms 00:28:56.639 [2024-07-22 18:36:08.610220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.639 [2024-07-22 18:36:08.610837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.639 [2024-07-22 18:36:08.610866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:56.639 [2024-07-22 18:36:08.610883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:28:56.639 [2024-07-22 18:36:08.610898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.898 [2024-07-22 18:36:08.657907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.898 [2024-07-22 18:36:08.657958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:56.898 [2024-07-22 18:36:08.657981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.898 [2024-07-22 18:36:08.657994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.898 [2024-07-22 18:36:08.658074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.898 [2024-07-22 18:36:08.658089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:56.898 [2024-07-22 18:36:08.658100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.898 [2024-07-22 
18:36:08.658111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.898 [2024-07-22 18:36:08.658203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.898 [2024-07-22 18:36:08.658222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:56.898 [2024-07-22 18:36:08.658235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.898 [2024-07-22 18:36:08.658253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.898 [2024-07-22 18:36:08.658277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.898 [2024-07-22 18:36:08.658291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:56.898 [2024-07-22 18:36:08.658304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.898 [2024-07-22 18:36:08.658316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.898 [2024-07-22 18:36:08.763450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.898 [2024-07-22 18:36:08.763516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:56.898 [2024-07-22 18:36:08.763542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.898 [2024-07-22 18:36:08.763555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.898 [2024-07-22 18:36:08.849519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.898 [2024-07-22 18:36:08.849585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:56.898 [2024-07-22 18:36:08.849604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.898 [2024-07-22 18:36:08.849616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.898 [2024-07-22 18:36:08.849717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.898 [2024-07-22 18:36:08.849738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:56.898 [2024-07-22 18:36:08.849751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.898 [2024-07-22 18:36:08.849762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.898 [2024-07-22 18:36:08.849818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.898 [2024-07-22 18:36:08.849832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:56.898 [2024-07-22 18:36:08.849844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.898 [2024-07-22 18:36:08.849869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.898 [2024-07-22 18:36:08.849992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.898 [2024-07-22 18:36:08.850020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:56.898 [2024-07-22 18:36:08.850033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.898 [2024-07-22 18:36:08.850044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.898 [2024-07-22 18:36:08.850103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.898 [2024-07-22 18:36:08.850130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:56.898 [2024-07-22 18:36:08.850142] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.898 [2024-07-22 18:36:08.850153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.898 [2024-07-22 18:36:08.850199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.898 [2024-07-22 18:36:08.850215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:56.898 [2024-07-22 18:36:08.850227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.898 [2024-07-22 18:36:08.850238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.898 [2024-07-22 18:36:08.850294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.898 [2024-07-22 18:36:08.850311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:56.898 [2024-07-22 18:36:08.850323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.898 [2024-07-22 18:36:08.850335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.898 [2024-07-22 18:36:08.850488] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 511.262 ms, result 0 00:28:58.275 00:28:58.276 00:28:58.276 18:36:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:00.806 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:00.806 18:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:00.806 [2024-07-22 18:36:12.308665] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:29:00.806 [2024-07-22 18:36:12.308857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85511 ] 00:29:00.806 [2024-07-22 18:36:12.474974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.806 [2024-07-22 18:36:12.717648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.374 [2024-07-22 18:36:13.096158] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:01.374 [2024-07-22 18:36:13.096244] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:01.374 [2024-07-22 18:36:13.268161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.374 [2024-07-22 18:36:13.268228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:01.374 [2024-07-22 18:36:13.268249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:01.374 [2024-07-22 18:36:13.268261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.374 [2024-07-22 18:36:13.268338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.374 [2024-07-22 18:36:13.268359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:01.374 [2024-07-22 18:36:13.268373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:01.374 [2024-07-22 18:36:13.268388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.374 [2024-07-22 18:36:13.268420] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:01.374 [2024-07-22 18:36:13.269352] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:01.374 [2024-07-22 18:36:13.269384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.374 [2024-07-22 18:36:13.269402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:01.374 [2024-07-22 18:36:13.269415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:29:01.374 [2024-07-22 18:36:13.269426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.374 [2024-07-22 18:36:13.271374] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:01.374 [2024-07-22 18:36:13.288030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.374 [2024-07-22 18:36:13.288079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:01.374 [2024-07-22 18:36:13.288096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.657 ms 00:29:01.374 [2024-07-22 18:36:13.288108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.374 [2024-07-22 18:36:13.288182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.374 [2024-07-22 18:36:13.288202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:01.374 [2024-07-22 18:36:13.288219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:01.374 [2024-07-22 18:36:13.288231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.374 [2024-07-22 18:36:13.296993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:01.374 [2024-07-22 18:36:13.297036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:01.374 [2024-07-22 18:36:13.297052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.668 ms 00:29:01.374 [2024-07-22 18:36:13.297063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.374 [2024-07-22 18:36:13.297170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.374 [2024-07-22 18:36:13.297200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:01.374 [2024-07-22 18:36:13.297213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:29:01.374 [2024-07-22 18:36:13.297239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.374 [2024-07-22 18:36:13.297302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.374 [2024-07-22 18:36:13.297335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:01.374 [2024-07-22 18:36:13.297357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:01.374 [2024-07-22 18:36:13.297367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.374 [2024-07-22 18:36:13.297412] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:01.374 [2024-07-22 18:36:13.302541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.374 [2024-07-22 18:36:13.302574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:01.374 [2024-07-22 18:36:13.302589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.139 ms 00:29:01.374 [2024-07-22 18:36:13.302600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.374 [2024-07-22 18:36:13.302653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.374 [2024-07-22 18:36:13.302669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:01.374 [2024-07-22 18:36:13.302695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:01.374 [2024-07-22 18:36:13.302708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.374 [2024-07-22 18:36:13.302775] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:01.374 [2024-07-22 18:36:13.302807] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:01.374 [2024-07-22 18:36:13.302859] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:01.374 [2024-07-22 18:36:13.302885] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:01.374 [2024-07-22 18:36:13.302991] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:01.374 [2024-07-22 18:36:13.303008] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:01.374 [2024-07-22 18:36:13.303022] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:01.374 [2024-07-22 18:36:13.303037] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:01.374 [2024-07-22 18:36:13.303050] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:01.374 [2024-07-22 18:36:13.303062] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:01.375 [2024-07-22 18:36:13.303073] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:01.375 [2024-07-22 18:36:13.303084] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:01.375 [2024-07-22 18:36:13.303095] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:01.375 [2024-07-22 18:36:13.303106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.375 [2024-07-22 18:36:13.303122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:01.375 [2024-07-22 18:36:13.303134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:29:01.375 [2024-07-22 18:36:13.303145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.375 [2024-07-22 18:36:13.303241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.375 [2024-07-22 18:36:13.303255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:01.375 [2024-07-22 18:36:13.303267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:01.375 [2024-07-22 18:36:13.303278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.375 [2024-07-22 18:36:13.303384] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:01.375 [2024-07-22 18:36:13.303442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:01.375 [2024-07-22 18:36:13.303461] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:01.375 [2024-07-22 18:36:13.303473] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.375 [2024-07-22 18:36:13.303485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:01.375 [2024-07-22 18:36:13.303496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:01.375 [2024-07-22 18:36:13.303507] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:01.375 [2024-07-22 18:36:13.303517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:01.375 [2024-07-22 18:36:13.303528] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:01.375 [2024-07-22 18:36:13.303540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:01.375 [2024-07-22 18:36:13.303551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:01.375 [2024-07-22 18:36:13.303561] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:01.375 [2024-07-22 18:36:13.303571] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:01.375 [2024-07-22 18:36:13.303581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:01.375 [2024-07-22 18:36:13.303592] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:01.375 [2024-07-22 18:36:13.303602] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.375 [2024-07-22 18:36:13.303613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:01.375 [2024-07-22 18:36:13.303623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:01.375 [2024-07-22 18:36:13.303634] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.375 [2024-07-22 18:36:13.303645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:01.375 [2024-07-22 18:36:13.303667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:01.375 [2024-07-22 18:36:13.303690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:01.375 [2024-07-22 18:36:13.303704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:01.375 [2024-07-22 18:36:13.303715] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:01.375 [2024-07-22 18:36:13.303725] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:01.375 [2024-07-22 18:36:13.303737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:01.375 [2024-07-22 18:36:13.303747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:01.375 [2024-07-22 18:36:13.303757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:01.375 [2024-07-22 18:36:13.303767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:01.375 [2024-07-22 18:36:13.303778] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:01.375 [2024-07-22 18:36:13.303796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:01.375 [2024-07-22 18:36:13.303806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:01.375 [2024-07-22 18:36:13.303817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:01.375 [2024-07-22 18:36:13.303828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:01.375 [2024-07-22 18:36:13.303838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:01.375 [2024-07-22 18:36:13.303849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:01.375 [2024-07-22 18:36:13.303860] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:01.375 [2024-07-22 18:36:13.303872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:01.375 [2024-07-22 18:36:13.303883] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:01.375 [2024-07-22 18:36:13.303899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.375 [2024-07-22 18:36:13.303909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:01.375 [2024-07-22 18:36:13.303919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:01.375 [2024-07-22 18:36:13.303930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.375 [2024-07-22 18:36:13.303945] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:01.375 [2024-07-22 18:36:13.303957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:01.375 [2024-07-22 18:36:13.303968] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:01.375 [2024-07-22 18:36:13.303979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.375 [2024-07-22 18:36:13.303990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:01.375 [2024-07-22 18:36:13.304001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:01.375 [2024-07-22 18:36:13.304011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:01.375 
[2024-07-22 18:36:13.304023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:01.375 [2024-07-22 18:36:13.304035] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:01.375 [2024-07-22 18:36:13.304046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:01.375 [2024-07-22 18:36:13.304058] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:01.375 [2024-07-22 18:36:13.304072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:01.375 [2024-07-22 18:36:13.304085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:01.375 [2024-07-22 18:36:13.304097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:01.375 [2024-07-22 18:36:13.304109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:01.375 [2024-07-22 18:36:13.304120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:01.375 [2024-07-22 18:36:13.304131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:01.375 [2024-07-22 18:36:13.304143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:01.375 [2024-07-22 18:36:13.304154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:01.375 [2024-07-22 18:36:13.304165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:01.375 [2024-07-22 18:36:13.304176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:01.375 [2024-07-22 18:36:13.304188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:01.375 [2024-07-22 18:36:13.304199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:01.375 [2024-07-22 18:36:13.304211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:01.375 [2024-07-22 18:36:13.304234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:01.375 [2024-07-22 18:36:13.304247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:01.375 [2024-07-22 18:36:13.304259] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:01.375 [2024-07-22 18:36:13.304272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:01.375 [2024-07-22 18:36:13.304286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:01.375 [2024-07-22 18:36:13.304297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:01.375 [2024-07-22 18:36:13.304309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:01.375 [2024-07-22 18:36:13.304321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:01.375 [2024-07-22 18:36:13.304334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.375 [2024-07-22 18:36:13.304351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:01.375 [2024-07-22 18:36:13.304364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:29:01.375 [2024-07-22 18:36:13.304375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.375 [2024-07-22 18:36:13.354383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.375 [2024-07-22 18:36:13.354710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:01.375 [2024-07-22 18:36:13.354885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.938 ms 00:29:01.376 [2024-07-22 18:36:13.354938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.376 [2024-07-22 18:36:13.355163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.376 [2024-07-22 18:36:13.355281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:01.376 [2024-07-22 18:36:13.355383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:29:01.376 [2024-07-22 18:36:13.355456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.635 [2024-07-22 18:36:13.400556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.635 [2024-07-22 18:36:13.400881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:01.635 [2024-07-22 18:36:13.401027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.900 ms 00:29:01.635 [2024-07-22 18:36:13.401050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.635 [2024-07-22 18:36:13.401125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.635 [2024-07-22 18:36:13.401142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:01.635 [2024-07-22 18:36:13.401156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:01.635 [2024-07-22 18:36:13.401167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.635 [2024-07-22 18:36:13.401794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.635 [2024-07-22 18:36:13.401818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:01.635 [2024-07-22 18:36:13.401833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:29:01.635 [2024-07-22 18:36:13.401844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.635 [2024-07-22 18:36:13.402013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.635 [2024-07-22 18:36:13.402031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:01.635 [2024-07-22 18:36:13.402043] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:29:01.635 [2024-07-22 18:36:13.402054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.635 [2024-07-22 18:36:13.421560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.635 [2024-07-22 18:36:13.421603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:01.635 [2024-07-22 18:36:13.421621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.478 ms 00:29:01.635 [2024-07-22 18:36:13.421633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.635 [2024-07-22 18:36:13.439334] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:01.635 [2024-07-22 18:36:13.439379] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:01.635 [2024-07-22 18:36:13.439446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.635 [2024-07-22 18:36:13.439460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:01.635 [2024-07-22 18:36:13.439473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.644 ms 00:29:01.635 [2024-07-22 18:36:13.439484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.635 [2024-07-22 18:36:13.470746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.635 [2024-07-22 18:36:13.470801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:01.635 [2024-07-22 18:36:13.470831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.208 ms 00:29:01.635 [2024-07-22 18:36:13.470843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.635 [2024-07-22 18:36:13.488208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.635 [2024-07-22 18:36:13.488284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:01.635 [2024-07-22 18:36:13.488320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.298 ms 00:29:01.635 [2024-07-22 18:36:13.488332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.635 [2024-07-22 18:36:13.505117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.635 [2024-07-22 18:36:13.505154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:01.635 [2024-07-22 18:36:13.505186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.698 ms 00:29:01.635 [2024-07-22 18:36:13.505197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.635 [2024-07-22 18:36:13.506199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.635 [2024-07-22 18:36:13.506236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:01.635 [2024-07-22 18:36:13.506253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:29:01.635 [2024-07-22 18:36:13.506264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.635 [2024-07-22 18:36:13.587964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.635 [2024-07-22 18:36:13.588066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:01.635 [2024-07-22 18:36:13.588098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 81.658 ms 00:29:01.635 [2024-07-22 18:36:13.588117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.635 [2024-07-22 18:36:13.603080] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:01.635 [2024-07-22 18:36:13.607601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.635 [2024-07-22 18:36:13.607643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:01.635 [2024-07-22 18:36:13.607674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.374 ms 00:29:01.635 [2024-07-22 18:36:13.607703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.635 [2024-07-22 18:36:13.607868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.635 [2024-07-22 18:36:13.607893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:01.636 [2024-07-22 18:36:13.607907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:01.636 [2024-07-22 18:36:13.607919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.636 [2024-07-22 18:36:13.608965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.636 [2024-07-22 18:36:13.609003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:01.636 [2024-07-22 18:36:13.609019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:29:01.636 [2024-07-22 18:36:13.609030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.636 [2024-07-22 18:36:13.609067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.636 [2024-07-22 18:36:13.609083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:01.636 [2024-07-22 18:36:13.609096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:01.636 [2024-07-22 18:36:13.609107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.636 [2024-07-22 18:36:13.609149] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:01.636 [2024-07-22 18:36:13.609167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.636 [2024-07-22 18:36:13.609178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:01.636 [2024-07-22 18:36:13.609196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:01.636 [2024-07-22 18:36:13.609207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.636 [2024-07-22 18:36:13.641420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.636 [2024-07-22 18:36:13.641484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:01.636 [2024-07-22 18:36:13.641503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.187 ms 00:29:01.636 [2024-07-22 18:36:13.641516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.636 [2024-07-22 18:36:13.641601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.636 [2024-07-22 18:36:13.641629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:01.636 [2024-07-22 18:36:13.641642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:01.636 [2024-07-22 18:36:13.641654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
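(Reading aid.) Every management step in this startup sequence is logged as a fixed quadruple of trace_step notices: Action/Rollback, name, duration, status. When a startup looks slow, a small filter over the raw log ranks the steps. A hedged sketch; it assumes the log in its original one-notice-per-line form (this transcript runs entries together) and infers the field layout from the lines above, not from any SPDK tool:

# Rank FTL management steps by duration, slowest first.
awk '
  /trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
  /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                            printf "%10.3f ms  %s\n", $0, name }
' ftl.log | sort -rn | head
# Against the startup traced above this would surface, e.g.:
#     81.658 ms  Restore P2L checkpoints
#     49.938 ms  Initialize metadata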
00:29:01.636 [2024-07-22 18:36:13.642993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.295 ms, result 0 00:29:40.801  Copying: 27/1024 [MB] (27 MBps) Copying: 54/1024 [MB] (26 MBps) Copying: 80/1024 [MB] (26 MBps) Copying: 108/1024 [MB] (27 MBps) Copying: 135/1024 [MB] (27 MBps) Copying: 160/1024 [MB] (25 MBps) Copying: 186/1024 [MB] (25 MBps) Copying: 214/1024 [MB] (27 MBps) Copying: 241/1024 [MB] (27 MBps) Copying: 268/1024 [MB] (27 MBps) Copying: 296/1024 [MB] (27 MBps) Copying: 323/1024 [MB] (27 MBps) Copying: 350/1024 [MB] (27 MBps) Copying: 378/1024 [MB] (27 MBps) Copying: 405/1024 [MB] (27 MBps) Copying: 432/1024 [MB] (27 MBps) Copying: 458/1024 [MB] (25 MBps) Copying: 482/1024 [MB] (24 MBps) Copying: 508/1024 [MB] (26 MBps) Copying: 535/1024 [MB] (26 MBps) Copying: 561/1024 [MB] (26 MBps) Copying: 587/1024 [MB] (25 MBps) Copying: 613/1024 [MB] (26 MBps) Copying: 638/1024 [MB] (25 MBps) Copying: 664/1024 [MB] (25 MBps) Copying: 689/1024 [MB] (24 MBps) Copying: 714/1024 [MB] (25 MBps) Copying: 738/1024 [MB] (24 MBps) Copying: 763/1024 [MB] (24 MBps) Copying: 789/1024 [MB] (25 MBps) Copying: 817/1024 [MB] (28 MBps) Copying: 845/1024 [MB] (27 MBps) Copying: 873/1024 [MB] (27 MBps) Copying: 900/1024 [MB] (26 MBps) Copying: 928/1024 [MB] (28 MBps) Copying: 955/1024 [MB] (26 MBps) Copying: 982/1024 [MB] (27 MBps) Copying: 1007/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-22 18:36:52.661001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.802 [2024-07-22 18:36:52.661093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:40.802 [2024-07-22 18:36:52.661117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:40.802 [2024-07-22 18:36:52.661130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.802 [2024-07-22 18:36:52.661162] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:40.802 [2024-07-22 18:36:52.665237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.802 [2024-07-22 18:36:52.665275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:40.802 [2024-07-22 18:36:52.665292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.050 ms 00:29:40.802 [2024-07-22 18:36:52.665304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.802 [2024-07-22 18:36:52.665556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.802 [2024-07-22 18:36:52.665574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:40.802 [2024-07-22 18:36:52.665588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:29:40.802 [2024-07-22 18:36:52.665599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.802 [2024-07-22 18:36:52.669558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.802 [2024-07-22 18:36:52.669594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:40.802 [2024-07-22 18:36:52.669610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.938 ms 00:29:40.802 [2024-07-22 18:36:52.669621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.802 [2024-07-22 18:36:52.676100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.802 [2024-07-22 
18:36:52.676141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:40.802 [2024-07-22 18:36:52.676157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.454 ms 00:29:40.802 [2024-07-22 18:36:52.676168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.802 [2024-07-22 18:36:52.709174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.802 [2024-07-22 18:36:52.709244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:40.802 [2024-07-22 18:36:52.709266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.926 ms 00:29:40.802 [2024-07-22 18:36:52.709278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.802 [2024-07-22 18:36:52.727480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.802 [2024-07-22 18:36:52.727544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:40.802 [2024-07-22 18:36:52.727563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.163 ms 00:29:40.802 [2024-07-22 18:36:52.727576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.802 [2024-07-22 18:36:52.730844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.802 [2024-07-22 18:36:52.730907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:40.802 [2024-07-22 18:36:52.730926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms 00:29:40.802 [2024-07-22 18:36:52.730948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.802 [2024-07-22 18:36:52.763257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.802 [2024-07-22 18:36:52.763325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:40.802 [2024-07-22 18:36:52.763346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.283 ms 00:29:40.802 [2024-07-22 18:36:52.763359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.802 [2024-07-22 18:36:52.795410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.802 [2024-07-22 18:36:52.795494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:40.802 [2024-07-22 18:36:52.795515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.004 ms 00:29:40.802 [2024-07-22 18:36:52.795526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.062 [2024-07-22 18:36:52.826246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.062 [2024-07-22 18:36:52.826319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:41.062 [2024-07-22 18:36:52.826361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.670 ms 00:29:41.062 [2024-07-22 18:36:52.826374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.062 [2024-07-22 18:36:52.857579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.062 [2024-07-22 18:36:52.857654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:41.062 [2024-07-22 18:36:52.857674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.091 ms 00:29:41.062 [2024-07-22 18:36:52.857703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.062 [2024-07-22 18:36:52.857755] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:41.062 [2024-07-22 18:36:52.857779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:41.062 [2024-07-22 18:36:52.857795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:29:41.062 [2024-07-22 18:36:52.857809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.857987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858088] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:41.062 [2024-07-22 18:36:52.858332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 
18:36:52.858415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:29:41.063 [2024-07-22 18:36:52.858746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.858999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.859011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.859024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.859037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.859049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.859061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:41.063 [2024-07-22 18:36:52.859084] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:41.063 [2024-07-22 18:36:52.859096] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 868c9ea1-f154-4d00-9b60-61655c3bc5e0 00:29:41.063 [2024-07-22 18:36:52.859108] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:29:41.063 [2024-07-22 18:36:52.859119] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:41.063 [2024-07-22 18:36:52.859141] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:41.063 [2024-07-22 18:36:52.859153] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:41.063 [2024-07-22 18:36:52.859163] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:41.063 [2024-07-22 18:36:52.859175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:41.063 [2024-07-22 18:36:52.859186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:41.063 [2024-07-22 18:36:52.859196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:41.063 [2024-07-22 18:36:52.859206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:41.063 [2024-07-22 18:36:52.859218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.063 [2024-07-22 18:36:52.859229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:41.063 [2024-07-22 18:36:52.859242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:29:41.063 [2024-07-22 18:36:52.859253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.063 [2024-07-22 18:36:52.877062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.063 [2024-07-22 18:36:52.877127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:41.063 [2024-07-22 18:36:52.877162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.741 ms 00:29:41.063 [2024-07-22 18:36:52.877174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.063 [2024-07-22 18:36:52.877707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.063 [2024-07-22 18:36:52.877733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:41.063 [2024-07-22 18:36:52.877748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:29:41.063 [2024-07-22 18:36:52.877759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.063 [2024-07-22 18:36:52.916048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.063 [2024-07-22 18:36:52.916118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:41.063 [2024-07-22 18:36:52.916139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.063 [2024-07-22 18:36:52.916151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.063 [2024-07-22 18:36:52.916250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.063 [2024-07-22 18:36:52.916265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:41.063 [2024-07-22 18:36:52.916278] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.063 [2024-07-22 18:36:52.916290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.063 [2024-07-22 18:36:52.916391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.063 [2024-07-22 18:36:52.916411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:41.063 [2024-07-22 18:36:52.916424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.064 [2024-07-22 18:36:52.916435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.064 [2024-07-22 18:36:52.916458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.064 [2024-07-22 18:36:52.916471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:41.064 [2024-07-22 18:36:52.916483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.064 [2024-07-22 18:36:52.916498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.064 [2024-07-22 18:36:53.023708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.064 [2024-07-22 18:36:53.023774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:41.064 [2024-07-22 18:36:53.023793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.064 [2024-07-22 18:36:53.023805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.322 [2024-07-22 18:36:53.117046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.322 [2024-07-22 18:36:53.117117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:41.322 [2024-07-22 18:36:53.117139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.322 [2024-07-22 18:36:53.117152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.322 [2024-07-22 18:36:53.117239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.322 [2024-07-22 18:36:53.117257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:41.322 [2024-07-22 18:36:53.117270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.322 [2024-07-22 18:36:53.117281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.322 [2024-07-22 18:36:53.117327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.322 [2024-07-22 18:36:53.117341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:41.322 [2024-07-22 18:36:53.117353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.322 [2024-07-22 18:36:53.117364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.322 [2024-07-22 18:36:53.117501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.322 [2024-07-22 18:36:53.117527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:41.322 [2024-07-22 18:36:53.117540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.322 [2024-07-22 18:36:53.117552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.322 [2024-07-22 18:36:53.117601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.323 [2024-07-22 18:36:53.117619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 
00:29:41.323 [2024-07-22 18:36:53.117632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.323 [2024-07-22 18:36:53.117644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.323 [2024-07-22 18:36:53.117713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.323 [2024-07-22 18:36:53.117731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:41.323 [2024-07-22 18:36:53.117751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.323 [2024-07-22 18:36:53.117763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.323 [2024-07-22 18:36:53.117816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.323 [2024-07-22 18:36:53.117834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:41.323 [2024-07-22 18:36:53.117846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.323 [2024-07-22 18:36:53.117858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.323 [2024-07-22 18:36:53.118005] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 456.969 ms, result 0 00:29:42.262 00:29:42.262 00:29:42.262 18:36:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:44.811 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:44.811 18:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:44.811 18:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:44.811 18:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:44.811 18:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:44.811 18:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:45.070 18:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:45.070 18:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:45.070 Process with pid 83580 is not found 00:29:45.070 18:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 83580 00:29:45.070 18:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 83580 ']' 00:29:45.070 18:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 83580 00:29:45.070 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (83580) - No such process 00:29:45.070 18:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 83580 is not found' 00:29:45.070 18:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:45.070 Remove shared memory files 00:29:45.070 18:36:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:45.070 18:36:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:45.070 18:36:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:45.070 18:36:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:45.070 18:36:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 
00:29:45.070 18:36:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:45.329 18:36:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:45.329 ************************************ 00:29:45.329 END TEST ftl_dirty_shutdown 00:29:45.329 ************************************ 00:29:45.329 00:29:45.329 real 3m50.910s 00:29:45.329 user 4m26.094s 00:29:45.329 sys 0m39.213s 00:29:45.329 18:36:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:45.329 18:36:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:45.329 18:36:57 ftl -- common/autotest_common.sh@1142 -- # return 0 00:29:45.329 18:36:57 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:45.329 18:36:57 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:29:45.329 18:36:57 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:45.329 18:36:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:45.329 ************************************ 00:29:45.329 START TEST ftl_upgrade_shutdown 00:29:45.329 ************************************ 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:45.329 * Looking for test storage... 00:29:45.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
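The START TEST / END TEST banners and the real/user/sys timings above come from the harness's run_test wrapper, which times every test script it launches. A simplified sketch of that convention (the actual implementation in autotest_common.sh does additional xtrace and bookkeeping):

  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"               # run the test script; emits the real/user/sys triple
      echo "END TEST $name"
  }
  run_test ftl_upgrade_shutdown "$testdir/upgrade_shutdown.sh" 0000:00:11.0 0000:00:10.0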
00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:45.329 
18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:45.329 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86009 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86009 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86009 ']' 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:45.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:45.330 18:36:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:45.587 [2024-07-22 18:36:57.382959] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
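waitforlisten above blocks until the just-launched spdk_tgt (pid 86009, pinned to core 0 by '--cpumask=[0]') starts answering on /var/tmp/spdk.sock. The idea reduces to a poll loop against the RPC socket; a simplified sketch, not the real helper, which also handles retries and timeouts:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
  spdk_tgt_pid=$!
  # rpc_get_methods is a cheap query; it succeeds once the RPC server is listening
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$spdk_tgt_pid" || break    # stop waiting if the target died during startup
      sleep 0.5
  done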
00:29:45.587 [2024-07-22 18:36:57.383416] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86009 ] 00:29:45.587 [2024-07-22 18:36:57.582944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.845 [2024-07-22 18:36:57.835167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:46.780 18:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:47.039 18:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:47.039 18:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:47.039 18:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:47.039 18:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:29:47.039 18:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:47.039 18:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:29:47.039 18:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:29:47.039 18:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:47.297 18:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:47.297 { 00:29:47.297 "name": "basen1", 00:29:47.297 "aliases": [ 00:29:47.297 "d146302f-c4b6-4736-8ca6-15204a18aaf7" 00:29:47.297 ], 00:29:47.297 "product_name": "NVMe disk", 00:29:47.297 "block_size": 4096, 00:29:47.297 "num_blocks": 1310720, 00:29:47.297 "uuid": "d146302f-c4b6-4736-8ca6-15204a18aaf7", 00:29:47.297 "assigned_rate_limits": { 00:29:47.297 "rw_ios_per_sec": 0, 00:29:47.297 "rw_mbytes_per_sec": 0, 00:29:47.297 "r_mbytes_per_sec": 0, 00:29:47.297 "w_mbytes_per_sec": 0 00:29:47.297 }, 00:29:47.297 "claimed": true, 00:29:47.297 "claim_type": "read_many_write_one", 00:29:47.297 "zoned": false, 00:29:47.297 "supported_io_types": { 00:29:47.297 "read": true, 00:29:47.297 "write": true, 00:29:47.297 "unmap": true, 00:29:47.297 "flush": true, 00:29:47.297 "reset": true, 00:29:47.297 "nvme_admin": true, 00:29:47.297 "nvme_io": true, 00:29:47.297 "nvme_io_md": false, 00:29:47.297 "write_zeroes": true, 00:29:47.297 "zcopy": false, 00:29:47.297 "get_zone_info": false, 00:29:47.297 "zone_management": false, 00:29:47.297 "zone_append": false, 00:29:47.297 "compare": true, 00:29:47.297 "compare_and_write": false, 00:29:47.297 "abort": true, 00:29:47.297 "seek_hole": false, 00:29:47.297 "seek_data": false, 00:29:47.297 "copy": true, 00:29:47.297 "nvme_iov_md": false 00:29:47.297 }, 00:29:47.297 "driver_specific": { 00:29:47.297 "nvme": [ 00:29:47.297 { 00:29:47.297 "pci_address": "0000:00:11.0", 00:29:47.297 "trid": { 00:29:47.297 "trtype": "PCIe", 00:29:47.297 "traddr": "0000:00:11.0" 00:29:47.297 }, 00:29:47.297 "ctrlr_data": { 00:29:47.297 "cntlid": 0, 00:29:47.297 "vendor_id": "0x1b36", 00:29:47.297 "model_number": "QEMU NVMe Ctrl", 00:29:47.297 "serial_number": "12341", 00:29:47.297 "firmware_revision": "8.0.0", 00:29:47.297 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:47.297 "oacs": { 00:29:47.297 "security": 0, 00:29:47.297 "format": 1, 00:29:47.297 "firmware": 0, 00:29:47.297 "ns_manage": 1 00:29:47.297 }, 00:29:47.297 "multi_ctrlr": false, 00:29:47.297 "ana_reporting": false 00:29:47.297 }, 00:29:47.297 "vs": { 00:29:47.297 "nvme_version": "1.4" 00:29:47.297 }, 00:29:47.297 "ns_data": { 00:29:47.297 "id": 1, 00:29:47.297 "can_share": false 00:29:47.297 } 00:29:47.297 } 00:29:47.297 ], 00:29:47.297 "mp_policy": "active_passive" 00:29:47.297 } 00:29:47.297 } 00:29:47.297 ]' 00:29:47.297 18:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:47.555 18:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:29:47.555 18:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:47.555 18:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:29:47.555 18:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:29:47.555 18:36:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:29:47.555 18:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:47.555 18:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:47.555 18:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:47.555 18:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:47.555 18:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:47.813 18:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=7d231a1f-2b82-4578-983c-9fc88c35f314 00:29:47.813 18:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:47.813 18:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7d231a1f-2b82-4578-983c-9fc88c35f314 00:29:48.071 18:36:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:48.329 18:37:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=524944ab-6629-4540-b75e-a76eaf7a2988 00:29:48.329 18:37:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 524944ab-6629-4540-b75e-a76eaf7a2988 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=9ca18199-f45e-44fc-8c3f-e5db36960ba4 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 9ca18199-f45e-44fc-8c3f-e5db36960ba4 ]] 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 9ca18199-f45e-44fc-8c3f-e5db36960ba4 5120 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=9ca18199-f45e-44fc-8c3f-e5db36960ba4 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 9ca18199-f45e-44fc-8c3f-e5db36960ba4 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=9ca18199-f45e-44fc-8c3f-e5db36960ba4 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9ca18199-f45e-44fc-8c3f-e5db36960ba4 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:48.895 { 00:29:48.895 "name": "9ca18199-f45e-44fc-8c3f-e5db36960ba4", 00:29:48.895 "aliases": [ 00:29:48.895 "lvs/basen1p0" 00:29:48.895 ], 00:29:48.895 "product_name": "Logical Volume", 00:29:48.895 "block_size": 4096, 00:29:48.895 "num_blocks": 5242880, 00:29:48.895 "uuid": "9ca18199-f45e-44fc-8c3f-e5db36960ba4", 00:29:48.895 "assigned_rate_limits": { 00:29:48.895 "rw_ios_per_sec": 0, 00:29:48.895 "rw_mbytes_per_sec": 0, 00:29:48.895 "r_mbytes_per_sec": 0, 00:29:48.895 "w_mbytes_per_sec": 0 00:29:48.895 }, 00:29:48.895 "claimed": false, 00:29:48.895 "zoned": false, 00:29:48.895 "supported_io_types": { 00:29:48.895 "read": true, 00:29:48.895 "write": true, 00:29:48.895 "unmap": true, 00:29:48.895 "flush": false, 00:29:48.895 "reset": true, 00:29:48.895 "nvme_admin": false, 00:29:48.895 "nvme_io": false, 00:29:48.895 "nvme_io_md": false, 00:29:48.895 "write_zeroes": true, 00:29:48.895 
"zcopy": false, 00:29:48.895 "get_zone_info": false, 00:29:48.895 "zone_management": false, 00:29:48.895 "zone_append": false, 00:29:48.895 "compare": false, 00:29:48.895 "compare_and_write": false, 00:29:48.895 "abort": false, 00:29:48.895 "seek_hole": true, 00:29:48.895 "seek_data": true, 00:29:48.895 "copy": false, 00:29:48.895 "nvme_iov_md": false 00:29:48.895 }, 00:29:48.895 "driver_specific": { 00:29:48.895 "lvol": { 00:29:48.895 "lvol_store_uuid": "524944ab-6629-4540-b75e-a76eaf7a2988", 00:29:48.895 "base_bdev": "basen1", 00:29:48.895 "thin_provision": true, 00:29:48.895 "num_allocated_clusters": 0, 00:29:48.895 "snapshot": false, 00:29:48.895 "clone": false, 00:29:48.895 "esnap_clone": false 00:29:48.895 } 00:29:48.895 } 00:29:48.895 } 00:29:48.895 ]' 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:29:48.895 18:37:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:49.205 18:37:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:29:49.205 18:37:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:29:49.205 18:37:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:29:49.205 18:37:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:49.205 18:37:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:49.205 18:37:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:49.462 18:37:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:49.462 18:37:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:49.462 18:37:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:49.720 18:37:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:49.720 18:37:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:49.720 18:37:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 9ca18199-f45e-44fc-8c3f-e5db36960ba4 -c cachen1p0 --l2p_dram_limit 2 00:29:49.978 [2024-07-22 18:37:01.830503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.978 [2024-07-22 18:37:01.830822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:49.978 [2024-07-22 18:37:01.831002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:49.978 [2024-07-22 18:37:01.831034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.978 [2024-07-22 18:37:01.831131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.978 [2024-07-22 18:37:01.831159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:49.978 [2024-07-22 18:37:01.831174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:29:49.978 [2024-07-22 18:37:01.831189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.978 [2024-07-22 18:37:01.831222] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:49.978 [2024-07-22 18:37:01.832404] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:49.978 [2024-07-22 18:37:01.832595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.978 [2024-07-22 18:37:01.832809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:49.978 [2024-07-22 18:37:01.832961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.379 ms 00:29:49.978 [2024-07-22 18:37:01.833094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.978 [2024-07-22 18:37:01.833245] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 377feee2-e45f-4f9c-9eef-e0241540f3df 00:29:49.978 [2024-07-22 18:37:01.835098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.978 [2024-07-22 18:37:01.835140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:49.978 [2024-07-22 18:37:01.835163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:49.978 [2024-07-22 18:37:01.835177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.978 [2024-07-22 18:37:01.844912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.978 [2024-07-22 18:37:01.844978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:49.978 [2024-07-22 18:37:01.845002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.639 ms 00:29:49.978 [2024-07-22 18:37:01.845015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.978 [2024-07-22 18:37:01.845098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.978 [2024-07-22 18:37:01.845118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:49.978 [2024-07-22 18:37:01.845135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:49.978 [2024-07-22 18:37:01.845148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.978 [2024-07-22 18:37:01.845261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.978 [2024-07-22 18:37:01.845282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:49.978 [2024-07-22 18:37:01.845298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:29:49.978 [2024-07-22 18:37:01.845314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.978 [2024-07-22 18:37:01.845353] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:49.978 [2024-07-22 18:37:01.850603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.979 [2024-07-22 18:37:01.850654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:49.979 [2024-07-22 18:37:01.850672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.264 ms 00:29:49.979 [2024-07-22 18:37:01.850709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.979 [2024-07-22 18:37:01.850754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.979 [2024-07-22 18:37:01.850774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:49.979 [2024-07-22 18:37:01.850787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:49.979 [2024-07-22 18:37:01.850802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
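The startup trace above runs on the stack assembled in the preceding RPC calls: a thin-provisioned lvol on the base NVMe device for data, and a split slice of the second device as the non-volatile write cache. Condensed from the log (UUIDs are the ones reported above):

  rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0     # -> basen1
  rpc.py bdev_lvol_create_lvstore basen1 lvs                             # after deleting stale store 7d231a1f-...
  rpc.py bdev_lvol_create basen1p0 20480 -t -u 524944ab-6629-4540-b75e-a76eaf7a2988   # 20 GiB thin lvol
  rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0    # -> cachen1
  rpc.py bdev_split_create cachen1 -s 5120 1                             # -> cachen1p0, 5 GiB
  rpc.py -t 60 bdev_ftl_create -b ftl -d 9ca18199-f45e-44fc-8c3f-e5db36960ba4 -c cachen1p0 --l2p_dram_limit 2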
00:29:49.979 [2024-07-22 18:37:01.850847] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:49.979 [2024-07-22 18:37:01.851014] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:49.979 [2024-07-22 18:37:01.851034] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:49.979 [2024-07-22 18:37:01.851062] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:29:49.979 [2024-07-22 18:37:01.851078] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:49.979 [2024-07-22 18:37:01.851095] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:49.979 [2024-07-22 18:37:01.851109] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:49.979 [2024-07-22 18:37:01.851123] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:49.979 [2024-07-22 18:37:01.851140] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:49.979 [2024-07-22 18:37:01.851154] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:49.979 [2024-07-22 18:37:01.851166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.979 [2024-07-22 18:37:01.851181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:49.979 [2024-07-22 18:37:01.851193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.321 ms 00:29:49.979 [2024-07-22 18:37:01.851207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.979 [2024-07-22 18:37:01.851300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.979 [2024-07-22 18:37:01.851318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:49.979 [2024-07-22 18:37:01.851331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:29:49.979 [2024-07-22 18:37:01.851345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.979 [2024-07-22 18:37:01.851469] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:49.979 [2024-07-22 18:37:01.851494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:49.979 [2024-07-22 18:37:01.851508] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:49.979 [2024-07-22 18:37:01.851523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.979 [2024-07-22 18:37:01.851536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:49.979 [2024-07-22 18:37:01.851559] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:49.979 [2024-07-22 18:37:01.851583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:49.979 [2024-07-22 18:37:01.851598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:49.979 [2024-07-22 18:37:01.851610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:49.979 [2024-07-22 18:37:01.851623] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.979 [2024-07-22 18:37:01.851635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:49.979 [2024-07-22 18:37:01.851650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:29:49.979 [2024-07-22 18:37:01.851661] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.979 [2024-07-22 18:37:01.851675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:49.979 [2024-07-22 18:37:01.851710] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:49.979 [2024-07-22 18:37:01.851725] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.979 [2024-07-22 18:37:01.851737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:49.979 [2024-07-22 18:37:01.851754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:49.979 [2024-07-22 18:37:01.851766] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.979 [2024-07-22 18:37:01.851782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:49.979 [2024-07-22 18:37:01.851794] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:49.979 [2024-07-22 18:37:01.851808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:49.979 [2024-07-22 18:37:01.851819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:49.979 [2024-07-22 18:37:01.851833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:49.979 [2024-07-22 18:37:01.851845] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:49.979 [2024-07-22 18:37:01.851858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:49.979 [2024-07-22 18:37:01.851870] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:49.979 [2024-07-22 18:37:01.851884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:49.979 [2024-07-22 18:37:01.851896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:49.979 [2024-07-22 18:37:01.851909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:49.979 [2024-07-22 18:37:01.851921] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:49.979 [2024-07-22 18:37:01.851934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:49.979 [2024-07-22 18:37:01.851946] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:49.979 [2024-07-22 18:37:01.851962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.979 [2024-07-22 18:37:01.851973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:49.979 [2024-07-22 18:37:01.851987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:49.979 [2024-07-22 18:37:01.851998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.979 [2024-07-22 18:37:01.852014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:49.979 [2024-07-22 18:37:01.852026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:49.979 [2024-07-22 18:37:01.852039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.979 [2024-07-22 18:37:01.852051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:49.979 [2024-07-22 18:37:01.852064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:49.979 [2024-07-22 18:37:01.852076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.979 [2024-07-22 18:37:01.852089] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:29:49.979 [2024-07-22 18:37:01.852101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:49.979 [2024-07-22 18:37:01.852116] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:49.979 [2024-07-22 18:37:01.852128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.979 [2024-07-22 18:37:01.852149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:49.979 [2024-07-22 18:37:01.852161] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:49.979 [2024-07-22 18:37:01.852178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:49.979 [2024-07-22 18:37:01.852190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:49.979 [2024-07-22 18:37:01.852204] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:49.979 [2024-07-22 18:37:01.852216] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:49.979 [2024-07-22 18:37:01.852235] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:49.979 [2024-07-22 18:37:01.852251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:49.979 [2024-07-22 18:37:01.852271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:49.979 [2024-07-22 18:37:01.852284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:49.979 [2024-07-22 18:37:01.852298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:49.979 [2024-07-22 18:37:01.852310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:49.979 [2024-07-22 18:37:01.852325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:49.979 [2024-07-22 18:37:01.852337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:49.979 [2024-07-22 18:37:01.852352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:49.979 [2024-07-22 18:37:01.852364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:49.979 [2024-07-22 18:37:01.852378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:49.979 [2024-07-22 18:37:01.852390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:49.979 [2024-07-22 18:37:01.852407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:49.979 [2024-07-22 18:37:01.852419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:49.979 [2024-07-22 18:37:01.852433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:29:49.979 [2024-07-22 18:37:01.852446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:49.979 [2024-07-22 18:37:01.852459] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:49.979 [2024-07-22 18:37:01.852472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:49.979 [2024-07-22 18:37:01.852488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:49.980 [2024-07-22 18:37:01.852500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:49.980 [2024-07-22 18:37:01.852514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:49.980 [2024-07-22 18:37:01.852525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:49.980 [2024-07-22 18:37:01.852541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.980 [2024-07-22 18:37:01.852553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:49.980 [2024-07-22 18:37:01.852568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.133 ms 00:29:49.980 [2024-07-22 18:37:01.852580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.980 [2024-07-22 18:37:01.852645] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
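The layout dump above is internally consistent, which is worth a spot check: 3774873 L2P entries at an address size of 4 bytes need 3774873 * 4 = 15099492 bytes, i.e. just under 14.4 MiB, matching the 14.50 MiB reserved for the l2p region at offset 0.12 MiB. The same arithmetic in shell:

  echo $(( 3774873 * 4 ))                      # 15099492 bytes
  echo "scale=3; 3774873 * 4 / 1048576" | bc   # 14.399 MiB, inside the 14.50 MiB l2p region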
00:29:49.980 [2024-07-22 18:37:01.852662] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:52.507 [2024-07-22 18:37:04.308243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.507 [2024-07-22 18:37:04.308318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:52.507 [2024-07-22 18:37:04.308346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2455.589 ms 00:29:52.507 [2024-07-22 18:37:04.308361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.507 [2024-07-22 18:37:04.347446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.507 [2024-07-22 18:37:04.347542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:52.507 [2024-07-22 18:37:04.347583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.780 ms 00:29:52.507 [2024-07-22 18:37:04.347606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.507 [2024-07-22 18:37:04.347826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.507 [2024-07-22 18:37:04.347857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:52.507 [2024-07-22 18:37:04.347885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:52.507 [2024-07-22 18:37:04.347920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.507 [2024-07-22 18:37:04.393895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.507 [2024-07-22 18:37:04.393982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:52.507 [2024-07-22 18:37:04.394023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.877 ms 00:29:52.507 [2024-07-22 18:37:04.394045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.507 [2024-07-22 18:37:04.394145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.507 [2024-07-22 18:37:04.394168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:52.507 [2024-07-22 18:37:04.394185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:52.507 [2024-07-22 18:37:04.394198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.507 [2024-07-22 18:37:04.394855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.507 [2024-07-22 18:37:04.394891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:52.507 [2024-07-22 18:37:04.394912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.557 ms 00:29:52.507 [2024-07-22 18:37:04.394925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.507 [2024-07-22 18:37:04.394995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.507 [2024-07-22 18:37:04.395013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:52.507 [2024-07-22 18:37:04.395034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:52.507 [2024-07-22 18:37:04.395046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.507 [2024-07-22 18:37:04.417380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.507 [2024-07-22 18:37:04.417449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:52.507 [2024-07-22 18:37:04.417475] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.298 ms 00:29:52.507 [2024-07-22 18:37:04.417488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.507 [2024-07-22 18:37:04.434043] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:52.507 [2024-07-22 18:37:04.435473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.507 [2024-07-22 18:37:04.435516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:52.507 [2024-07-22 18:37:04.435539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.827 ms 00:29:52.508 [2024-07-22 18:37:04.435556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.508 [2024-07-22 18:37:04.476714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.508 [2024-07-22 18:37:04.476799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:52.508 [2024-07-22 18:37:04.476824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.087 ms 00:29:52.508 [2024-07-22 18:37:04.476840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.508 [2024-07-22 18:37:04.476980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.508 [2024-07-22 18:37:04.477009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:52.508 [2024-07-22 18:37:04.477024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:29:52.508 [2024-07-22 18:37:04.477042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.508 [2024-07-22 18:37:04.507996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.508 [2024-07-22 18:37:04.508073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:52.508 [2024-07-22 18:37:04.508097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.868 ms 00:29:52.508 [2024-07-22 18:37:04.508113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.767 [2024-07-22 18:37:04.539730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.767 [2024-07-22 18:37:04.539818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:52.767 [2024-07-22 18:37:04.539842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.542 ms 00:29:52.767 [2024-07-22 18:37:04.539858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.767 [2024-07-22 18:37:04.540769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.767 [2024-07-22 18:37:04.540808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:52.767 [2024-07-22 18:37:04.540825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.840 ms 00:29:52.767 [2024-07-22 18:37:04.540846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.767 [2024-07-22 18:37:04.640717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.767 [2024-07-22 18:37:04.640807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:52.767 [2024-07-22 18:37:04.640831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 99.754 ms 00:29:52.767 [2024-07-22 18:37:04.640852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.767 [2024-07-22 18:37:04.675988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:52.767 [2024-07-22 18:37:04.676065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:52.767 [2024-07-22 18:37:04.676089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.052 ms 00:29:52.767 [2024-07-22 18:37:04.676107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.767 [2024-07-22 18:37:04.711466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.767 [2024-07-22 18:37:04.711555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:52.767 [2024-07-22 18:37:04.711592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.278 ms 00:29:52.768 [2024-07-22 18:37:04.711621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.768 [2024-07-22 18:37:04.743356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.768 [2024-07-22 18:37:04.743444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:52.768 [2024-07-22 18:37:04.743468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.626 ms 00:29:52.768 [2024-07-22 18:37:04.743484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.768 [2024-07-22 18:37:04.743558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.768 [2024-07-22 18:37:04.743588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:52.768 [2024-07-22 18:37:04.743605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:52.768 [2024-07-22 18:37:04.743634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.768 [2024-07-22 18:37:04.743789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.768 [2024-07-22 18:37:04.743827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:52.768 [2024-07-22 18:37:04.743846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:52.768 [2024-07-22 18:37:04.743861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.768 [2024-07-22 18:37:04.745478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2914.174 ms, result 0 00:29:52.768 { 00:29:52.768 "name": "ftl", 00:29:52.768 "uuid": "377feee2-e45f-4f9c-9eef-e0241540f3df" 00:29:52.768 } 00:29:52.768 18:37:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:53.027 [2024-07-22 18:37:05.024227] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.286 18:37:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:53.544 18:37:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:53.804 [2024-07-22 18:37:05.636985] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:53.804 18:37:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:54.062 [2024-07-22 18:37:05.923809] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:54.062 18:37:05 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:54.321 Fill FTL, iteration 1 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=86133 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 86133 /var/tmp/spdk.tgt.sock 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86133 ']' 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:54.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:54.321 18:37:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:54.580 [2024-07-22 18:37:06.401126] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
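The variables echoed at the start of this step define the workload: two iterations, each writing 1024 blocks of 1048576 bytes (1 GiB) at queue depth 2, with seek tracking the write offset, skip the read-back offset, and sums collecting one MD5 digest per iteration. Paraphrased as a loop (a sketch; the actual upgrade_shutdown.sh interleaves the checksum steps shown further below):

  bs=1048576; count=1024; qd=2; iterations=2
  seek=0; skip=0; sums=()
  for (( i = 0; i < iterations; i++ )); do
      echo "Fill FTL, iteration $(( i + 1 ))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$(( seek + count ))
      # read-back and MD5 capture follow each fill pass
  done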
00:29:54.580 [2024-07-22 18:37:06.401280] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86133 ] 00:29:54.580 [2024-07-22 18:37:06.570421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.839 [2024-07-22 18:37:06.842469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.773 18:37:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:55.773 18:37:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:29:55.773 18:37:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:56.032 ftln1 00:29:56.032 18:37:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:56.032 18:37:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:56.330 18:37:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:56.330 18:37:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 86133 00:29:56.330 18:37:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86133 ']' 00:29:56.330 18:37:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86133 00:29:56.330 18:37:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:29:56.330 18:37:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:56.330 18:37:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86133 00:29:56.330 killing process with pid 86133 00:29:56.330 18:37:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:56.330 18:37:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:56.330 18:37:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86133' 00:29:56.330 18:37:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86133 00:29:56.330 18:37:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86133 00:29:58.861 18:37:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:58.861 18:37:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:58.861 [2024-07-22 18:37:10.762450] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
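tcp_dd above is a two-stage helper: a short-lived initiator (pid 86133, core 1) attaches the exported subsystem over NVMe/TCP so the FTL bdev appears locally as ftln1, its bdev configuration is dumped to ini.json, the initiator is killed, and spdk_dd then replays that JSON to drive the copy without a live RPC server. The essential calls, taken from the log:

  # attach the ftl bdev over TCP; it shows up as ftln1
  rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  { echo '{"subsystems": ['; rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev; echo ']}'; } > ini.json
  # stand-alone copy driven by the saved config
  spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0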
00:29:58.861 [2024-07-22 18:37:10.762632] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86186 ] 00:29:59.120 [2024-07-22 18:37:10.942086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.379 [2024-07-22 18:37:11.226540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.951  Copying: 212/1024 [MB] (212 MBps) Copying: 428/1024 [MB] (216 MBps) Copying: 647/1024 [MB] (219 MBps) Copying: 865/1024 [MB] (218 MBps) Copying: 1024/1024 [MB] (average 215 MBps) 00:30:05.951 00:30:05.951 Calculate MD5 checksum, iteration 1 00:30:05.951 18:37:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:05.951 18:37:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:05.951 18:37:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:05.951 18:37:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:05.951 18:37:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:05.951 18:37:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:05.951 18:37:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:05.951 18:37:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:05.951 [2024-07-22 18:37:17.726791] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:30:05.951 [2024-07-22 18:37:17.726931] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86256 ] 00:30:05.951 [2024-07-22 18:37:17.892079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.210 [2024-07-22 18:37:18.135219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.554  Copying: 525/1024 [MB] (525 MBps) Copying: 1020/1024 [MB] (495 MBps) Copying: 1024/1024 [MB] (average 509 MBps) 00:30:09.554 00:30:09.812 18:37:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:09.812 18:37:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:12.360 18:37:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:12.360 Fill FTL, iteration 2 00:30:12.360 18:37:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d3425bdcadf14c2bccef9fed01c36664 00:30:12.360 18:37:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:12.360 18:37:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:12.360 18:37:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:12.360 18:37:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:12.360 18:37:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:12.360 18:37:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:12.360 18:37:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:12.360 18:37:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:12.360 18:37:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:12.360 [2024-07-22 18:37:23.869121] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
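The offset bookkeeping between iterations is the crux of the pattern: writes use --seek and read-backs use --skip, both advanced by 1024 MiB per pass, so the two iterations touch disjoint ranges:

    iter 1: --seek=0     fills MiB 0..1023;    afterwards seek=1024, skip=1024
    iter 2: --seek=1024  fills MiB 1024..2047; afterwards seek=2048, skip=2048

By the time the FTL properties are inspected below, 2048 MiB of distinct random data have been written through ftln1.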
00:30:12.360 [2024-07-22 18:37:23.869298] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86323 ] 00:30:12.360 [2024-07-22 18:37:24.039135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.360 [2024-07-22 18:37:24.276188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.857  Copying: 209/1024 [MB] (209 MBps) Copying: 413/1024 [MB] (204 MBps) Copying: 623/1024 [MB] (210 MBps) Copying: 831/1024 [MB] (208 MBps) Copying: 1024/1024 [MB] (average 208 MBps) 00:30:18.858 00:30:18.858 Calculate MD5 checksum, iteration 2 00:30:18.858 18:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:18.858 18:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:18.858 18:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:18.858 18:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:18.858 18:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:18.858 18:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:18.858 18:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:18.858 18:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:19.115 [2024-07-22 18:37:30.958189] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
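Once iteration 2's digest is captured the same way (bcb93bf2...), the script turns to the FTL property RPCs: verbose_mode is enabled so bdev_ftl_get_properties exposes the per-band and per-chunk state dumped below, the in-use cache chunks are counted, and prep_upgrade_on_shutdown is armed. The jq filter below is copied verbatim from the trace; the surrounding capture is an illustrative sketch, not the script's literal code:

    # Count cache chunks that hold any data; the script branches on zero.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    used=$("$RPC" bdev_ftl_get_properties -b ftl |
           jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && echo 'no used chunks'   # here used=3, so not taken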
00:30:19.115 [2024-07-22 18:37:30.958381] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86393 ] 00:30:19.373 [2024-07-22 18:37:31.132125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.373 [2024-07-22 18:37:31.378613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.616  Copying: 531/1024 [MB] (531 MBps) Copying: 1024/1024 [MB] (average 529 MBps) 00:30:23.616 00:30:23.875 18:37:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:23.875 18:37:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:26.408 18:37:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:26.408 18:37:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=bcb93bf20cefad88d6613c875f9cf872 00:30:26.408 18:37:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:26.408 18:37:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:26.408 18:37:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:26.408 [2024-07-22 18:37:38.048815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.408 [2024-07-22 18:37:38.048882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:26.408 [2024-07-22 18:37:38.048905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:26.408 [2024-07-22 18:37:38.048918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.408 [2024-07-22 18:37:38.048963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.408 [2024-07-22 18:37:38.048984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:26.408 [2024-07-22 18:37:38.048998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:26.408 [2024-07-22 18:37:38.049010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.408 [2024-07-22 18:37:38.049052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.408 [2024-07-22 18:37:38.049067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:26.408 [2024-07-22 18:37:38.049079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:26.408 [2024-07-22 18:37:38.049091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.408 [2024-07-22 18:37:38.049181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.353 ms, result 0 00:30:26.408 true 00:30:26.408 18:37:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:26.408 { 00:30:26.408 "name": "ftl", 00:30:26.408 "properties": [ 00:30:26.408 { 00:30:26.408 "name": "superblock_version", 00:30:26.408 "value": 5, 00:30:26.408 "read-only": true 00:30:26.408 }, 00:30:26.408 { 00:30:26.408 "name": "base_device", 00:30:26.408 "bands": [ 00:30:26.408 { 00:30:26.408 "id": 0, 00:30:26.408 "state": "FREE", 00:30:26.408 "validity": 0.0 00:30:26.408 }, 00:30:26.408 { 00:30:26.409 "id": 1, 
00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 2, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 3, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 4, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 5, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 6, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 7, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 8, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 9, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 10, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 11, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 12, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 13, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 14, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 15, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 16, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 17, 00:30:26.409 "state": "FREE", 00:30:26.409 "validity": 0.0 00:30:26.409 } 00:30:26.409 ], 00:30:26.409 "read-only": true 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "name": "cache_device", 00:30:26.409 "type": "bdev", 00:30:26.409 "chunks": [ 00:30:26.409 { 00:30:26.409 "id": 0, 00:30:26.409 "state": "INACTIVE", 00:30:26.409 "utilization": 0.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 1, 00:30:26.409 "state": "CLOSED", 00:30:26.409 "utilization": 1.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 2, 00:30:26.409 "state": "CLOSED", 00:30:26.409 "utilization": 1.0 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 3, 00:30:26.409 "state": "OPEN", 00:30:26.409 "utilization": 0.001953125 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "id": 4, 00:30:26.409 "state": "OPEN", 00:30:26.409 "utilization": 0.0 00:30:26.409 } 00:30:26.409 ], 00:30:26.409 "read-only": true 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "name": "verbose_mode", 00:30:26.409 "value": true, 00:30:26.409 "unit": "", 00:30:26.409 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:26.409 }, 00:30:26.409 { 00:30:26.409 "name": "prep_upgrade_on_shutdown", 00:30:26.409 "value": false, 00:30:26.409 "unit": "", 00:30:26.409 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:26.409 } 00:30:26.409 ] 00:30:26.409 } 00:30:26.409 18:37:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:26.668 [2024-07-22 18:37:38.549396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.668 [2024-07-22 18:37:38.549457] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:26.668 [2024-07-22 18:37:38.549478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:26.668 [2024-07-22 18:37:38.549490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.668 [2024-07-22 18:37:38.549527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.668 [2024-07-22 18:37:38.549542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:26.668 [2024-07-22 18:37:38.549555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:26.668 [2024-07-22 18:37:38.549566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.668 [2024-07-22 18:37:38.549592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.668 [2024-07-22 18:37:38.549605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:26.668 [2024-07-22 18:37:38.549617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:26.668 [2024-07-22 18:37:38.549628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.668 [2024-07-22 18:37:38.549721] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.294 ms, result 0 00:30:26.668 true 00:30:26.668 18:37:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:26.668 18:37:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:26.668 18:37:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:26.926 18:37:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:26.926 18:37:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:26.926 18:37:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:27.185 [2024-07-22 18:37:39.074055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.185 [2024-07-22 18:37:39.074116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:27.185 [2024-07-22 18:37:39.074137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:27.185 [2024-07-22 18:37:39.074149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.185 [2024-07-22 18:37:39.074184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.185 [2024-07-22 18:37:39.074200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:27.185 [2024-07-22 18:37:39.074213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:27.185 [2024-07-22 18:37:39.074232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.185 [2024-07-22 18:37:39.074259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.185 [2024-07-22 18:37:39.074272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:27.185 [2024-07-22 18:37:39.074284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:27.185 [2024-07-22 18:37:39.074295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.185 [2024-07-22 18:37:39.074370] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.304 ms, result 0 00:30:27.185 true 00:30:27.185 18:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:27.445 { 00:30:27.445 "name": "ftl", 00:30:27.445 "properties": [ 00:30:27.445 { 00:30:27.445 "name": "superblock_version", 00:30:27.445 "value": 5, 00:30:27.445 "read-only": true 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "name": "base_device", 00:30:27.445 "bands": [ 00:30:27.445 { 00:30:27.445 "id": 0, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 1, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 2, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 3, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 4, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 5, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 6, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 7, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 8, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 9, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 10, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 11, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 12, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 13, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 14, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.445 "id": 15, 00:30:27.445 "state": "FREE", 00:30:27.445 "validity": 0.0 00:30:27.445 }, 00:30:27.445 { 00:30:27.446 "id": 16, 00:30:27.446 "state": "FREE", 00:30:27.446 "validity": 0.0 00:30:27.446 }, 00:30:27.446 { 00:30:27.446 "id": 17, 00:30:27.446 "state": "FREE", 00:30:27.446 "validity": 0.0 00:30:27.446 } 00:30:27.446 ], 00:30:27.446 "read-only": true 00:30:27.446 }, 00:30:27.446 { 00:30:27.446 "name": "cache_device", 00:30:27.446 "type": "bdev", 00:30:27.446 "chunks": [ 00:30:27.446 { 00:30:27.446 "id": 0, 00:30:27.446 "state": "INACTIVE", 00:30:27.446 "utilization": 0.0 00:30:27.446 }, 00:30:27.446 { 00:30:27.446 "id": 1, 00:30:27.446 "state": "CLOSED", 00:30:27.446 "utilization": 1.0 00:30:27.446 }, 00:30:27.446 { 00:30:27.446 "id": 2, 00:30:27.446 "state": "CLOSED", 00:30:27.446 "utilization": 1.0 00:30:27.446 }, 00:30:27.446 { 00:30:27.446 "id": 3, 00:30:27.446 "state": "OPEN", 00:30:27.446 "utilization": 0.001953125 00:30:27.446 }, 00:30:27.446 { 00:30:27.446 "id": 4, 00:30:27.446 "state": "OPEN", 00:30:27.446 "utilization": 0.0 00:30:27.446 } 00:30:27.446 ], 00:30:27.446 "read-only": true 00:30:27.446 }, 00:30:27.446 { 00:30:27.446 "name": "verbose_mode", 00:30:27.446 "value": true, 00:30:27.446 "unit": "", 00:30:27.446 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:27.446 }, 00:30:27.446 { 00:30:27.446 "name": "prep_upgrade_on_shutdown", 00:30:27.446 "value": true, 00:30:27.446 "unit": "", 00:30:27.446 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:27.446 } 00:30:27.446 ] 00:30:27.446 } 00:30:27.446 18:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:27.446 18:37:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86009 ]] 00:30:27.446 18:37:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86009 00:30:27.446 18:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86009 ']' 00:30:27.446 18:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86009 00:30:27.446 18:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:30:27.446 18:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:27.446 18:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86009 00:30:27.446 killing process with pid 86009 00:30:27.446 18:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:27.446 18:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:27.446 18:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86009' 00:30:27.446 18:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86009 00:30:27.446 18:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86009 00:30:28.380 [2024-07-22 18:37:40.392591] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:28.655 [2024-07-22 18:37:40.411167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.655 [2024-07-22 18:37:40.411219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:28.655 [2024-07-22 18:37:40.411241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:28.655 [2024-07-22 18:37:40.411254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.655 [2024-07-22 18:37:40.411287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:28.655 [2024-07-22 18:37:40.414866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.655 [2024-07-22 18:37:40.414910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:28.655 [2024-07-22 18:37:40.414927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.558 ms 00:30:28.655 [2024-07-22 18:37:40.414944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.624 [2024-07-22 18:37:48.828657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.624 [2024-07-22 18:37:48.828753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:38.624 [2024-07-22 18:37:48.828777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8413.672 ms 00:30:38.624 [2024-07-22 18:37:48.828798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.624 [2024-07-22 18:37:48.830108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.624 [2024-07-22 18:37:48.830152] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:38.624 [2024-07-22 18:37:48.830168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.285 ms 00:30:38.624 [2024-07-22 18:37:48.830181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.624 [2024-07-22 18:37:48.831390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.624 [2024-07-22 18:37:48.831422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:38.624 [2024-07-22 18:37:48.831437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.160 ms 00:30:38.624 [2024-07-22 18:37:48.831456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.624 [2024-07-22 18:37:48.844216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.624 [2024-07-22 18:37:48.844256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:38.624 [2024-07-22 18:37:48.844289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.716 ms 00:30:38.624 [2024-07-22 18:37:48.844301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.624 [2024-07-22 18:37:48.851873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.624 [2024-07-22 18:37:48.851917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:38.624 [2024-07-22 18:37:48.851934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.529 ms 00:30:38.624 [2024-07-22 18:37:48.851947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.624 [2024-07-22 18:37:48.852067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.624 [2024-07-22 18:37:48.852086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:38.624 [2024-07-22 18:37:48.852108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:30:38.624 [2024-07-22 18:37:48.852120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.624 [2024-07-22 18:37:48.864186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.624 [2024-07-22 18:37:48.864223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:30:38.624 [2024-07-22 18:37:48.864255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.043 ms 00:30:38.624 [2024-07-22 18:37:48.864267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.624 [2024-07-22 18:37:48.876381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.624 [2024-07-22 18:37:48.876424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:30:38.624 [2024-07-22 18:37:48.876439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.074 ms 00:30:38.624 [2024-07-22 18:37:48.876450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.624 [2024-07-22 18:37:48.888376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.624 [2024-07-22 18:37:48.888412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:38.624 [2024-07-22 18:37:48.888443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.886 ms 00:30:38.624 [2024-07-22 18:37:48.888455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.624 [2024-07-22 18:37:48.900304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.624 
[2024-07-22 18:37:48.900343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:38.624 [2024-07-22 18:37:48.900359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.754 ms 00:30:38.624 [2024-07-22 18:37:48.900370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.624 [2024-07-22 18:37:48.900410] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:38.624 [2024-07-22 18:37:48.900431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:38.624 [2024-07-22 18:37:48.900446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:38.624 [2024-07-22 18:37:48.900458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:38.624 [2024-07-22 18:37:48.900471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:38.624 [2024-07-22 18:37:48.900492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:38.625 [2024-07-22 18:37:48.900677] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:38.625 [2024-07-22 18:37:48.900707] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 377feee2-e45f-4f9c-9eef-e0241540f3df 00:30:38.625 [2024-07-22 18:37:48.900720] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:38.625 [2024-07-22 18:37:48.900731] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:30:38.625 [2024-07-22 18:37:48.900742] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:38.625 [2024-07-22 18:37:48.900755] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:38.625 [2024-07-22 18:37:48.900766] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:38.625 [2024-07-22 18:37:48.900784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:38.625 [2024-07-22 18:37:48.900795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:38.625 [2024-07-22 18:37:48.900805] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:38.625 [2024-07-22 18:37:48.900817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:38.625 [2024-07-22 18:37:48.900836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.625 [2024-07-22 18:37:48.900848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:38.625 [2024-07-22 18:37:48.900861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.428 ms 00:30:38.625 [2024-07-22 18:37:48.900883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:48.917932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.625 [2024-07-22 18:37:48.917979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:38.625 [2024-07-22 18:37:48.917997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.007 ms 00:30:38.625 [2024-07-22 18:37:48.918018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:48.918492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.625 [2024-07-22 18:37:48.918507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:38.625 [2024-07-22 18:37:48.918521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.435 ms 00:30:38.625 [2024-07-22 18:37:48.918532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:48.971831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.625 [2024-07-22 18:37:48.971896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:38.625 [2024-07-22 18:37:48.971922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.625 [2024-07-22 18:37:48.971934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:48.971996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.625 [2024-07-22 18:37:48.972012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:38.625 [2024-07-22 18:37:48.972026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.625 [2024-07-22 18:37:48.972038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:48.972184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.625 [2024-07-22 18:37:48.972203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:38.625 [2024-07-22 18:37:48.972216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.625 [2024-07-22 18:37:48.972231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:48.972256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.625 [2024-07-22 
18:37:48.972268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:38.625 [2024-07-22 18:37:48.972280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.625 [2024-07-22 18:37:48.972291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:49.077427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.625 [2024-07-22 18:37:49.077504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:38.625 [2024-07-22 18:37:49.077531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.625 [2024-07-22 18:37:49.077542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:49.169352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.625 [2024-07-22 18:37:49.169436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:38.625 [2024-07-22 18:37:49.169456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.625 [2024-07-22 18:37:49.169469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:49.169571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.625 [2024-07-22 18:37:49.169589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:38.625 [2024-07-22 18:37:49.169603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.625 [2024-07-22 18:37:49.169615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:49.169709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.625 [2024-07-22 18:37:49.169728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:38.625 [2024-07-22 18:37:49.169741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.625 [2024-07-22 18:37:49.169753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:49.169874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.625 [2024-07-22 18:37:49.169892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:38.625 [2024-07-22 18:37:49.169904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.625 [2024-07-22 18:37:49.169916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:49.169963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.625 [2024-07-22 18:37:49.169985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:38.625 [2024-07-22 18:37:49.169998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.625 [2024-07-22 18:37:49.170009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:49.170067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.625 [2024-07-22 18:37:49.170098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:38.625 [2024-07-22 18:37:49.170110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.625 [2024-07-22 18:37:49.170121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:49.170208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:30:38.625 [2024-07-22 18:37:49.170235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:38.625 [2024-07-22 18:37:49.170248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.625 [2024-07-22 18:37:49.170261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.625 [2024-07-22 18:37:49.170419] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8759.219 ms, result 0 00:30:41.182 18:37:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:41.182 18:37:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:41.182 18:37:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:41.182 18:37:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:41.182 18:37:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:41.182 18:37:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86604 00:30:41.182 18:37:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:41.182 18:37:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86604 00:30:41.183 18:37:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:41.183 18:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86604 ']' 00:30:41.183 18:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.183 18:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:41.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.183 18:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.183 18:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:41.183 18:37:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:41.183 [2024-07-22 18:37:52.753399] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
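The statistics block in the shutdown dump above is worth a quick consistency check. Bands 1 through 3 hold 261120 + 261120 + 2048 = 524288 valid blocks, which (at FTL's customary 4 KiB block size) is exactly the 2 GiB the two fill passes wrote, and the reported write amplification is just total writes over user writes:

    user writes:  524288 blocks  (261120 + 261120 + 2048 across bands 1..3)
    total writes: 786752 blocks
    WAF = 786752 / 524288 ≈ 1.5006

The extra ~262k blocks are presumably FTL's own metadata and relocation traffic.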
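With prep_upgrade_on_shutdown armed, the old target (pid 86009) shuts down cleanly: the 'FTL shutdown' management process above finishes in 8759.219 ms, 8413.672 ms of it spent draining the core poller. A fresh spdk_tgt (pid 86604) is then launched from the saved tgt.json, and the script blocks in waitforlisten until it answers RPC. A hypothetical re-sketch of such a wait loop (the real helper lives in autotest_common.sh and its probe may differ):

    # Poll until the freshly started target responds on its RPC socket.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      local i
      for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1          # target died early
        "$RPC" -s "$sock" spdk_get_version &>/dev/null && return 0
        sleep 0.1
      done
      return 1                                          # never came up
    }
    waitforlisten 86604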
00:30:41.183 [2024-07-22 18:37:52.753548] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86604 ] 00:30:41.183 [2024-07-22 18:37:52.915977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.183 [2024-07-22 18:37:53.155221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.125 [2024-07-22 18:37:54.045931] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:42.125 [2024-07-22 18:37:54.046009] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:42.384 [2024-07-22 18:37:54.194974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.384 [2024-07-22 18:37:54.195036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:42.384 [2024-07-22 18:37:54.195062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:42.384 [2024-07-22 18:37:54.195074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.384 [2024-07-22 18:37:54.195144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.384 [2024-07-22 18:37:54.195164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:42.384 [2024-07-22 18:37:54.195188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:30:42.384 [2024-07-22 18:37:54.195200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.384 [2024-07-22 18:37:54.195233] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:42.384 [2024-07-22 18:37:54.196171] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:42.384 [2024-07-22 18:37:54.196205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.384 [2024-07-22 18:37:54.196219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:42.384 [2024-07-22 18:37:54.196231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.979 ms 00:30:42.384 [2024-07-22 18:37:54.196243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.385 [2024-07-22 18:37:54.198224] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:42.385 [2024-07-22 18:37:54.214939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.385 [2024-07-22 18:37:54.214987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:42.385 [2024-07-22 18:37:54.215006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.715 ms 00:30:42.385 [2024-07-22 18:37:54.215018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.385 [2024-07-22 18:37:54.215110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.385 [2024-07-22 18:37:54.215129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:42.385 [2024-07-22 18:37:54.215143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:42.385 [2024-07-22 18:37:54.215154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.385 [2024-07-22 18:37:54.223863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.385 [2024-07-22 
18:37:54.223909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:42.385 [2024-07-22 18:37:54.223926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.599 ms 00:30:42.385 [2024-07-22 18:37:54.223938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.385 [2024-07-22 18:37:54.224030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.385 [2024-07-22 18:37:54.224050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:42.385 [2024-07-22 18:37:54.224063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:30:42.385 [2024-07-22 18:37:54.224079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.385 [2024-07-22 18:37:54.224151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.385 [2024-07-22 18:37:54.224179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:42.385 [2024-07-22 18:37:54.224191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:42.385 [2024-07-22 18:37:54.224203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.385 [2024-07-22 18:37:54.224244] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:42.385 [2024-07-22 18:37:54.229267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.385 [2024-07-22 18:37:54.229305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:42.385 [2024-07-22 18:37:54.229321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.033 ms 00:30:42.385 [2024-07-22 18:37:54.229333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.385 [2024-07-22 18:37:54.229388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.385 [2024-07-22 18:37:54.229406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:42.385 [2024-07-22 18:37:54.229419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:42.385 [2024-07-22 18:37:54.229435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.385 [2024-07-22 18:37:54.229506] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:42.385 [2024-07-22 18:37:54.229541] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:42.385 [2024-07-22 18:37:54.229585] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:42.385 [2024-07-22 18:37:54.229605] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:30:42.385 [2024-07-22 18:37:54.229735] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:42.385 [2024-07-22 18:37:54.229754] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:42.385 [2024-07-22 18:37:54.229780] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:30:42.385 [2024-07-22 18:37:54.229795] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:42.385 [2024-07-22 18:37:54.229809] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:42.385 [2024-07-22 18:37:54.229822] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:42.385 [2024-07-22 18:37:54.229833] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:42.385 [2024-07-22 18:37:54.229844] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:42.385 [2024-07-22 18:37:54.229855] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:42.385 [2024-07-22 18:37:54.229868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.385 [2024-07-22 18:37:54.229879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:42.385 [2024-07-22 18:37:54.229892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.365 ms 00:30:42.385 [2024-07-22 18:37:54.229903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.385 [2024-07-22 18:37:54.230006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.385 [2024-07-22 18:37:54.230023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:42.385 [2024-07-22 18:37:54.230041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:30:42.385 [2024-07-22 18:37:54.230053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.385 [2024-07-22 18:37:54.230166] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:42.385 [2024-07-22 18:37:54.230183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:42.385 [2024-07-22 18:37:54.230195] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:42.385 [2024-07-22 18:37:54.230207] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.385 [2024-07-22 18:37:54.230219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:42.385 [2024-07-22 18:37:54.230229] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:42.385 [2024-07-22 18:37:54.230240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:42.385 [2024-07-22 18:37:54.230251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:42.385 [2024-07-22 18:37:54.230263] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:42.385 [2024-07-22 18:37:54.230274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.385 [2024-07-22 18:37:54.230284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:42.385 [2024-07-22 18:37:54.230295] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:42.385 [2024-07-22 18:37:54.230305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.385 [2024-07-22 18:37:54.230316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:42.385 [2024-07-22 18:37:54.230326] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:42.385 [2024-07-22 18:37:54.230344] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.385 [2024-07-22 18:37:54.230355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:42.385 [2024-07-22 18:37:54.230366] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:42.385 [2024-07-22 18:37:54.230376] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.385 [2024-07-22 18:37:54.230387] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:42.385 [2024-07-22 18:37:54.230398] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:42.385 [2024-07-22 18:37:54.230409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:42.385 [2024-07-22 18:37:54.230419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:42.385 [2024-07-22 18:37:54.230430] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:42.385 [2024-07-22 18:37:54.230441] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:42.385 [2024-07-22 18:37:54.230451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:42.385 [2024-07-22 18:37:54.230462] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:42.385 [2024-07-22 18:37:54.230472] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:42.385 [2024-07-22 18:37:54.230483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:42.385 [2024-07-22 18:37:54.230494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:42.385 [2024-07-22 18:37:54.230504] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:42.385 [2024-07-22 18:37:54.230515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:42.385 [2024-07-22 18:37:54.230526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:42.385 [2024-07-22 18:37:54.230537] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.385 [2024-07-22 18:37:54.230547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:42.385 [2024-07-22 18:37:54.230558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:42.385 [2024-07-22 18:37:54.230569] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.386 [2024-07-22 18:37:54.230580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:42.386 [2024-07-22 18:37:54.230590] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:42.386 [2024-07-22 18:37:54.230601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.386 [2024-07-22 18:37:54.230612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:42.386 [2024-07-22 18:37:54.230622] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:42.386 [2024-07-22 18:37:54.230632] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.386 [2024-07-22 18:37:54.230642] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:42.386 [2024-07-22 18:37:54.230659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:42.386 [2024-07-22 18:37:54.230671] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:42.386 [2024-07-22 18:37:54.230697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.386 [2024-07-22 18:37:54.230719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:42.386 [2024-07-22 18:37:54.230730] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:42.386 [2024-07-22 18:37:54.230742] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:42.386 [2024-07-22 18:37:54.230753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:42.386 [2024-07-22 18:37:54.230776] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:42.386 [2024-07-22 18:37:54.230788] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:42.386 [2024-07-22 18:37:54.230800] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:42.386 [2024-07-22 18:37:54.230815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:42.386 [2024-07-22 18:37:54.230828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:42.386 [2024-07-22 18:37:54.230840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:42.386 [2024-07-22 18:37:54.230852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:42.386 [2024-07-22 18:37:54.230864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:42.386 [2024-07-22 18:37:54.230876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:42.386 [2024-07-22 18:37:54.230888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:42.386 [2024-07-22 18:37:54.230900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:42.386 [2024-07-22 18:37:54.230912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:42.386 [2024-07-22 18:37:54.230923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:42.386 [2024-07-22 18:37:54.230935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:42.386 [2024-07-22 18:37:54.230947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:42.386 [2024-07-22 18:37:54.230959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:42.386 [2024-07-22 18:37:54.230971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:42.386 [2024-07-22 18:37:54.230983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:42.386 [2024-07-22 18:37:54.230994] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:42.386 [2024-07-22 18:37:54.231007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:42.386 [2024-07-22 18:37:54.231020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:42.386 [2024-07-22 18:37:54.231033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:42.386 [2024-07-22 18:37:54.231045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:42.386 [2024-07-22 18:37:54.231057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:42.386 [2024-07-22 18:37:54.231070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.386 [2024-07-22 18:37:54.231082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:42.386 [2024-07-22 18:37:54.231094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.968 ms 00:30:42.386 [2024-07-22 18:37:54.231110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.386 [2024-07-22 18:37:54.231183] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:42.386 [2024-07-22 18:37:54.231202] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:44.919 [2024-07-22 18:37:56.689081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.920 [2024-07-22 18:37:56.689346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:44.920 [2024-07-22 18:37:56.689470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2457.896 ms 00:30:44.920 [2024-07-22 18:37:56.689521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.920 [2024-07-22 18:37:56.729176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.920 [2024-07-22 18:37:56.729440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:44.920 [2024-07-22 18:37:56.729567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.233 ms 00:30:44.920 [2024-07-22 18:37:56.729724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.920 [2024-07-22 18:37:56.729909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.920 [2024-07-22 18:37:56.729965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:44.920 [2024-07-22 18:37:56.730073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:44.920 [2024-07-22 18:37:56.730121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.920 [2024-07-22 18:37:56.773293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.920 [2024-07-22 18:37:56.773549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:44.920 [2024-07-22 18:37:56.773674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.072 ms 00:30:44.920 [2024-07-22 18:37:56.773742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.920 [2024-07-22 18:37:56.773921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.920 [2024-07-22 18:37:56.774042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:44.920 [2024-07-22 18:37:56.774159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:44.920 [2024-07-22 18:37:56.774206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.920 [2024-07-22 18:37:56.774964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.920 [2024-07-22 18:37:56.775087] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:44.920 [2024-07-22 18:37:56.775194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.564 ms 00:30:44.920 [2024-07-22 18:37:56.775301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.920 [2024-07-22 18:37:56.775418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.920 [2024-07-22 18:37:56.775545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:44.920 [2024-07-22 18:37:56.775643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:30:44.920 [2024-07-22 18:37:56.775722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.920 [2024-07-22 18:37:56.796689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.920 [2024-07-22 18:37:56.796913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:44.920 [2024-07-22 18:37:56.797030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.897 ms 00:30:44.920 [2024-07-22 18:37:56.797090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.920 [2024-07-22 18:37:56.814547] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:44.920 [2024-07-22 18:37:56.814740] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:44.920 [2024-07-22 18:37:56.814901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.920 [2024-07-22 18:37:56.815010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:44.920 [2024-07-22 18:37:56.815059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.596 ms 00:30:44.920 [2024-07-22 18:37:56.815150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.920 [2024-07-22 18:37:56.833128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.920 [2024-07-22 18:37:56.833304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:44.920 [2024-07-22 18:37:56.833415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.811 ms 00:30:44.920 [2024-07-22 18:37:56.833462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.920 [2024-07-22 18:37:56.848456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.920 [2024-07-22 18:37:56.848501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:44.920 [2024-07-22 18:37:56.848535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.932 ms 00:30:44.920 [2024-07-22 18:37:56.848546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.920 [2024-07-22 18:37:56.863641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.920 [2024-07-22 18:37:56.863702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:44.920 [2024-07-22 18:37:56.863721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.045 ms 00:30:44.920 [2024-07-22 18:37:56.863732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.920 [2024-07-22 18:37:56.864608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.920 [2024-07-22 18:37:56.864646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:44.920 [2024-07-22 
18:37:56.864662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.751 ms 00:30:44.920 [2024-07-22 18:37:56.864674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.179 [2024-07-22 18:37:56.958125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.179 [2024-07-22 18:37:56.958189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:45.179 [2024-07-22 18:37:56.958211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 93.400 ms 00:30:45.179 [2024-07-22 18:37:56.958224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.180 [2024-07-22 18:37:56.970892] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:45.180 [2024-07-22 18:37:56.972137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.180 [2024-07-22 18:37:56.972171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:45.180 [2024-07-22 18:37:56.972189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.835 ms 00:30:45.180 [2024-07-22 18:37:56.972208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.180 [2024-07-22 18:37:56.972332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.180 [2024-07-22 18:37:56.972353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:45.180 [2024-07-22 18:37:56.972367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:45.180 [2024-07-22 18:37:56.972379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.180 [2024-07-22 18:37:56.972461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.180 [2024-07-22 18:37:56.972479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:45.180 [2024-07-22 18:37:56.972493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:45.180 [2024-07-22 18:37:56.972505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.180 [2024-07-22 18:37:56.972548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.180 [2024-07-22 18:37:56.972564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:45.180 [2024-07-22 18:37:56.972577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:45.180 [2024-07-22 18:37:56.972588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.180 [2024-07-22 18:37:56.972633] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:45.180 [2024-07-22 18:37:56.972650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.180 [2024-07-22 18:37:56.972662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:45.180 [2024-07-22 18:37:56.972674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:45.180 [2024-07-22 18:37:56.972703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.180 [2024-07-22 18:37:57.004019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.180 [2024-07-22 18:37:57.004070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:45.180 [2024-07-22 18:37:57.004089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.285 ms 00:30:45.180 [2024-07-22 18:37:57.004102] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:45.180 [2024-07-22 18:37:57.004195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:45.180 [2024-07-22 18:37:57.004213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:30:45.180 [2024-07-22 18:37:57.004227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms
00:30:45.180 [2024-07-22 18:37:57.004239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:45.180 [2024-07-22 18:37:57.005663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2810.164 ms, result 0
00:30:45.180 [2024-07-22 18:37:57.020467] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:45.180 [2024-07-22 18:37:57.036470] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:30:45.180 [2024-07-22 18:37:57.046040] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:30:45.438 18:37:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:45.438 18:37:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0
00:30:45.438 18:37:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:30:45.438 18:37:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:30:45.438 18:37:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:30:45.697 [2024-07-22 18:37:57.490429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:45.697 [2024-07-22 18:37:57.490501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:30:45.697 [2024-07-22 18:37:57.490539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms
00:30:45.697 [2024-07-22 18:37:57.490558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:45.697 [2024-07-22 18:37:57.490622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:45.697 [2024-07-22 18:37:57.490638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:30:45.697 [2024-07-22 18:37:57.490650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:30:45.697 [2024-07-22 18:37:57.490661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:45.697 [2024-07-22 18:37:57.490686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:45.697 [2024-07-22 18:37:57.490701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:30:45.697 [2024-07-22 18:37:57.490712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:30:45.698 [2024-07-22 18:37:57.490741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:45.698 [2024-07-22 18:37:57.490856] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.413 ms, result 0
00:30:45.698 true
00:30:45.698 18:37:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:45.957 {
00:30:45.957 "name": "ftl",
00:30:45.957 "properties": [
00:30:45.957 {
00:30:45.957 "name": "superblock_version",
00:30:45.957 "value": 5,
00:30:45.957 "read-only": true
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "name": "base_device",
00:30:45.957 "bands": [
00:30:45.957 {
00:30:45.957 "id": 0,
00:30:45.957 "state": "CLOSED",
00:30:45.957 "validity": 1.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 1,
00:30:45.957 "state": "CLOSED",
00:30:45.957 "validity": 1.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 2,
00:30:45.957 "state": "CLOSED",
00:30:45.957 "validity": 0.007843137254901933
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 3,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 4,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 5,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 6,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 7,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 8,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 9,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 10,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 11,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 12,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 13,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 14,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 15,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 16,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 17,
00:30:45.957 "state": "FREE",
00:30:45.957 "validity": 0.0
00:30:45.957 }
00:30:45.957 ],
00:30:45.957 "read-only": true
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "name": "cache_device",
00:30:45.957 "type": "bdev",
00:30:45.957 "chunks": [
00:30:45.957 {
00:30:45.957 "id": 0,
00:30:45.957 "state": "INACTIVE",
00:30:45.957 "utilization": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 1,
00:30:45.957 "state": "OPEN",
00:30:45.957 "utilization": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 2,
00:30:45.957 "state": "OPEN",
00:30:45.957 "utilization": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 3,
00:30:45.957 "state": "FREE",
00:30:45.957 "utilization": 0.0
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "id": 4,
00:30:45.957 "state": "FREE",
00:30:45.957 "utilization": 0.0
00:30:45.957 }
00:30:45.957 ],
00:30:45.957 "read-only": true
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "name": "verbose_mode",
00:30:45.957 "value": true,
00:30:45.957 "unit": "",
00:30:45.957 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:30:45.957 },
00:30:45.957 {
00:30:45.957 "name": "prep_upgrade_on_shutdown",
00:30:45.957 "value": false,
00:30:45.957 "unit": "",
00:30:45.957 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:30:45.957 }
00:30:45.957 ]
00:30:45.957 }
00:30:45.957 18:37:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:30:45.957 18:37:57
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:45.957 18:37:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:46.217 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:46.217 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:46.217 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:46.217 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:46.217 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:46.476 Validate MD5 checksum, iteration 1 00:30:46.476 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:46.476 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:46.476 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:46.476 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:46.476 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:46.476 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:46.476 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:46.476 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:46.476 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:46.476 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:46.476 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:46.476 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:46.476 18:37:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:46.476 [2024-07-22 18:37:58.409715] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
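The two jq filters traced above (upgrade_shutdown.sh@82 and @89) reduce the bdev_ftl_get_properties JSON to simple gate counts: NV-cache chunks still holding data, and bands left OPENED. Both come back 0 here before checksum validation starts. A minimal standalone sketch of the first check, assuming a running target that rpc.py can reach and jq on PATH; the jq expression is copied from the trace, the error message is illustrative:

    # Count NV-cache chunks with non-zero utilization, as upgrade_shutdown.sh@82 does.
    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -ne 0 ]] && echo "NV cache not clean: $used chunk(s) in use" >&2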
00:30:46.476 [2024-07-22 18:37:58.409859] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86677 ] 00:30:46.735 [2024-07-22 18:37:58.571554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.994 [2024-07-22 18:37:58.809403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.145  Copying: 529/1024 [MB] (529 MBps) Copying: 999/1024 [MB] (470 MBps) Copying: 1024/1024 [MB] (average 498 MBps) 00:30:51.145 00:30:51.145 18:38:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:51.146 18:38:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:53.680 18:38:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:53.680 18:38:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d3425bdcadf14c2bccef9fed01c36664 00:30:53.680 18:38:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d3425bdcadf14c2bccef9fed01c36664 != \d\3\4\2\5\b\d\c\a\d\f\1\4\c\2\b\c\c\e\f\9\f\e\d\0\1\c\3\6\6\6\4 ]] 00:30:53.680 18:38:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:53.680 18:38:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:53.680 18:38:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:53.680 Validate MD5 checksum, iteration 2 00:30:53.681 18:38:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:53.681 18:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:53.681 18:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:53.681 18:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:53.681 18:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:53.681 18:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:53.681 [2024-07-22 18:38:05.444601] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
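The window just traced is one pass of test_validate_checksum: tcp_dd pulls a 1 GiB window (1024 x 1 MiB blocks, queue depth 2) from the exported ftln1 bdev into a scratch file, md5sum fingerprints it, and skip advances by 1024 MiB for the next pass. A sketch of the loop shape under the trace's own parameters (skip, iterations, and the function name all appear in the xtrace); ref_sums is an illustrative stand-in, since where the script keeps its reference digests is not visible in this part of the log:

    # One fingerprinting pass per 1 GiB window of the FTL bdev; tcp_dd does the
    # actual transfer over NVMe/TCP (see the spdk_dd expansion in the trace).
    test_validate_checksum() {
        local file=/home/vagrant/spdk_repo/spdk/test/ftl/file
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
            skip=$((skip + 1024))
            sum=$(md5sum "$file" | cut -f1 -d' ')
            # compare against the digest recorded for this window, e.g. d3425bdc... above
            [[ $sum != "${ref_sums[i]}" ]] && return 1
        done
    }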
00:30:53.681 [2024-07-22 18:38:05.444976] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86751 ] 00:30:53.681 [2024-07-22 18:38:05.612086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.939 [2024-07-22 18:38:05.911693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.984  Copying: 433/1024 [MB] (433 MBps) Copying: 841/1024 [MB] (408 MBps) Copying: 1024/1024 [MB] (average 432 MBps) 00:30:58.984 00:30:58.984 18:38:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:58.984 18:38:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=bcb93bf20cefad88d6613c875f9cf872 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ bcb93bf20cefad88d6613c875f9cf872 != \b\c\b\9\3\b\f\2\0\c\e\f\a\d\8\8\d\6\6\1\3\c\8\7\5\f\9\c\f\8\7\2 ]] 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 86604 ]] 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 86604 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:01.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86831 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86831 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86831 ']' 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
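What follows in the log is the point of the whole test: tcp_target_shutdown_dirty (ftl/common.sh@137-139 in the trace) SIGKILLs the target, pid 86604 here, so FTL gets no chance to run its shutdown path, and tcp_target_setup (@81-91) immediately starts a replacement from the saved tgt.json. A sketch of those two steps as the traces suggest; the variable names are taken from the "Killed" diagnostic bash prints a little further down, and capturing the pid with $! is an assumption:

    # Dirty shutdown: SIGKILL leaves the FTL metadata on disk in the dirty state.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid

    # Fresh target from the JSON config captured earlier; waitforlisten is the
    # autotest_common.sh helper that blocks until /var/tmp/spdk.sock answers.
    "$spdk_tgt_bin" "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"

The startup trace that follows shows the consequence: instead of scrubbing and plain initialization, FTL goes through Recover band state, P2L checkpoint preprocessing, and open-chunk recovery before reporting 'FTL startup' with result 0 again.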
00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:01.516 18:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:01.516 [2024-07-22 18:38:13.170557] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:01.516 [2024-07-22 18:38:13.171029] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86831 ] 00:31:01.516 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 86604 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:01.516 [2024-07-22 18:38:13.343147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.774 [2024-07-22 18:38:13.586938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.748 [2024-07-22 18:38:14.471959] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:02.748 [2024-07-22 18:38:14.472043] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:02.748 [2024-07-22 18:38:14.621002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.748 [2024-07-22 18:38:14.621080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:02.748 [2024-07-22 18:38:14.621107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:02.748 [2024-07-22 18:38:14.621121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.748 [2024-07-22 18:38:14.621200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.748 [2024-07-22 18:38:14.621220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:02.748 [2024-07-22 18:38:14.621234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:31:02.748 [2024-07-22 18:38:14.621247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.748 [2024-07-22 18:38:14.621281] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:02.748 [2024-07-22 18:38:14.622306] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:02.748 [2024-07-22 18:38:14.622347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.748 [2024-07-22 18:38:14.622363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:02.748 [2024-07-22 18:38:14.622377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.072 ms 00:31:02.748 [2024-07-22 18:38:14.622389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.748 [2024-07-22 18:38:14.622938] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:02.748 [2024-07-22 18:38:14.644937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.748 [2024-07-22 18:38:14.645006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:02.748 [2024-07-22 18:38:14.645029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.998 ms 00:31:02.748 [2024-07-22 18:38:14.645053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.748 [2024-07-22 18:38:14.657784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:02.748 [2024-07-22 18:38:14.657855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:02.748 [2024-07-22 18:38:14.657877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:31:02.748 [2024-07-22 18:38:14.657890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.748 [2024-07-22 18:38:14.658459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.748 [2024-07-22 18:38:14.658487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:02.748 [2024-07-22 18:38:14.658508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.426 ms 00:31:02.748 [2024-07-22 18:38:14.658521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.748 [2024-07-22 18:38:14.658622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.748 [2024-07-22 18:38:14.658650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:02.748 [2024-07-22 18:38:14.658665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:31:02.748 [2024-07-22 18:38:14.658677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.748 [2024-07-22 18:38:14.658771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.748 [2024-07-22 18:38:14.658791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:02.748 [2024-07-22 18:38:14.658805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:31:02.748 [2024-07-22 18:38:14.658822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.748 [2024-07-22 18:38:14.658864] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:02.748 [2024-07-22 18:38:14.663258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.748 [2024-07-22 18:38:14.663301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:02.748 [2024-07-22 18:38:14.663319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.405 ms 00:31:02.748 [2024-07-22 18:38:14.663331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.748 [2024-07-22 18:38:14.663381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.748 [2024-07-22 18:38:14.663409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:02.748 [2024-07-22 18:38:14.663423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:02.748 [2024-07-22 18:38:14.663435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.748 [2024-07-22 18:38:14.663490] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:02.748 [2024-07-22 18:38:14.663524] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:02.748 [2024-07-22 18:38:14.663571] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:02.748 [2024-07-22 18:38:14.663595] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:31:02.748 [2024-07-22 18:38:14.663723] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:02.748 [2024-07-22 18:38:14.663752] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:02.748 [2024-07-22 18:38:14.663778] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:31:02.748 [2024-07-22 18:38:14.663803] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:02.748 [2024-07-22 18:38:14.663828] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:02.748 [2024-07-22 18:38:14.663849] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:02.748 [2024-07-22 18:38:14.663868] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:02.748 [2024-07-22 18:38:14.663896] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:02.748 [2024-07-22 18:38:14.663917] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:02.748 [2024-07-22 18:38:14.663941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.748 [2024-07-22 18:38:14.663964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:02.749 [2024-07-22 18:38:14.663996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.454 ms 00:31:02.749 [2024-07-22 18:38:14.664019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.749 [2024-07-22 18:38:14.664152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.749 [2024-07-22 18:38:14.664187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:02.749 [2024-07-22 18:38:14.664203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:31:02.749 [2024-07-22 18:38:14.664215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.749 [2024-07-22 18:38:14.664348] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:02.749 [2024-07-22 18:38:14.664368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:02.749 [2024-07-22 18:38:14.664382] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:02.749 [2024-07-22 18:38:14.664395] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.749 [2024-07-22 18:38:14.664407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:02.749 [2024-07-22 18:38:14.664418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:02.749 [2024-07-22 18:38:14.664430] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:02.749 [2024-07-22 18:38:14.664441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:02.749 [2024-07-22 18:38:14.664453] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:02.749 [2024-07-22 18:38:14.664464] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.749 [2024-07-22 18:38:14.664475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:02.749 [2024-07-22 18:38:14.664486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:02.749 [2024-07-22 18:38:14.664506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.749 [2024-07-22 18:38:14.664518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:02.749 [2024-07-22 18:38:14.664530] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:02.749 [2024-07-22 18:38:14.664541] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.749 [2024-07-22 18:38:14.664553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:02.749 [2024-07-22 18:38:14.664564] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:02.749 [2024-07-22 18:38:14.664576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.749 [2024-07-22 18:38:14.664587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:02.749 [2024-07-22 18:38:14.664598] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:02.749 [2024-07-22 18:38:14.664610] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:02.749 [2024-07-22 18:38:14.664622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:02.749 [2024-07-22 18:38:14.664633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:02.749 [2024-07-22 18:38:14.664645] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:02.749 [2024-07-22 18:38:14.664657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:02.749 [2024-07-22 18:38:14.664668] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:02.749 [2024-07-22 18:38:14.664693] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:02.749 [2024-07-22 18:38:14.664709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:02.749 [2024-07-22 18:38:14.664721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:02.749 [2024-07-22 18:38:14.664732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:02.749 [2024-07-22 18:38:14.664744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:02.749 [2024-07-22 18:38:14.664755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:02.749 [2024-07-22 18:38:14.664767] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.749 [2024-07-22 18:38:14.664778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:02.749 [2024-07-22 18:38:14.664789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:02.749 [2024-07-22 18:38:14.664808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.749 [2024-07-22 18:38:14.664819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:02.749 [2024-07-22 18:38:14.664830] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:02.749 [2024-07-22 18:38:14.664842] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.749 [2024-07-22 18:38:14.664853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:02.749 [2024-07-22 18:38:14.664864] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:02.749 [2024-07-22 18:38:14.664875] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.749 [2024-07-22 18:38:14.664886] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:02.749 [2024-07-22 18:38:14.664904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:02.749 [2024-07-22 18:38:14.664917] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:02.749 [2024-07-22 18:38:14.664929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:02.749 [2024-07-22 18:38:14.664941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:02.749 [2024-07-22 18:38:14.664952] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:02.749 [2024-07-22 18:38:14.664979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:02.749 [2024-07-22 18:38:14.664992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:02.749 [2024-07-22 18:38:14.665002] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:02.749 [2024-07-22 18:38:14.665014] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:02.749 [2024-07-22 18:38:14.665027] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:02.749 [2024-07-22 18:38:14.665048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:02.749 [2024-07-22 18:38:14.665062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:02.749 [2024-07-22 18:38:14.665073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:02.749 [2024-07-22 18:38:14.665085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:02.749 [2024-07-22 18:38:14.665097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:02.749 [2024-07-22 18:38:14.665108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:02.749 [2024-07-22 18:38:14.665120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:02.749 [2024-07-22 18:38:14.665131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:02.749 [2024-07-22 18:38:14.665143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:02.749 [2024-07-22 18:38:14.665155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:02.749 [2024-07-22 18:38:14.665166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:02.749 [2024-07-22 18:38:14.665178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:02.749 [2024-07-22 18:38:14.665189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:02.749 [2024-07-22 18:38:14.665201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:02.749 [2024-07-22 18:38:14.665213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:02.749 [2024-07-22 18:38:14.665225] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:02.749 [2024-07-22 18:38:14.665238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:02.749 [2024-07-22 18:38:14.665251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:02.749 [2024-07-22 18:38:14.665263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:02.749 [2024-07-22 18:38:14.665274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:02.749 [2024-07-22 18:38:14.665286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:02.749 [2024-07-22 18:38:14.665299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.749 [2024-07-22 18:38:14.665317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:02.749 [2024-07-22 18:38:14.665330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.015 ms 00:31:02.749 [2024-07-22 18:38:14.665341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.749 [2024-07-22 18:38:14.702524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.749 [2024-07-22 18:38:14.702606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:02.749 [2024-07-22 18:38:14.702629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.101 ms 00:31:02.749 [2024-07-22 18:38:14.702643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.749 [2024-07-22 18:38:14.702776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.749 [2024-07-22 18:38:14.702813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:02.749 [2024-07-22 18:38:14.702830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:02.749 [2024-07-22 18:38:14.702851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.749 [2024-07-22 18:38:14.746023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.749 [2024-07-22 18:38:14.746088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:02.749 [2024-07-22 18:38:14.746111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.069 ms 00:31:02.749 [2024-07-22 18:38:14.746124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.749 [2024-07-22 18:38:14.746206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.749 [2024-07-22 18:38:14.746230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:02.749 [2024-07-22 18:38:14.746245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:02.750 [2024-07-22 18:38:14.746257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.750 [2024-07-22 18:38:14.746429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.750 [2024-07-22 18:38:14.746449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:02.750 [2024-07-22 18:38:14.746463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.087 ms 00:31:02.750 [2024-07-22 18:38:14.746476] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:02.750 [2024-07-22 18:38:14.746539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.750 [2024-07-22 18:38:14.746556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:02.750 [2024-07-22 18:38:14.746575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:02.750 [2024-07-22 18:38:14.746587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.009 [2024-07-22 18:38:14.767410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.009 [2024-07-22 18:38:14.767485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:03.009 [2024-07-22 18:38:14.767520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.789 ms 00:31:03.009 [2024-07-22 18:38:14.767534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.009 [2024-07-22 18:38:14.767786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.009 [2024-07-22 18:38:14.767811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:03.009 [2024-07-22 18:38:14.767826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:03.009 [2024-07-22 18:38:14.767839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.009 [2024-07-22 18:38:14.797809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.009 [2024-07-22 18:38:14.797889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:03.009 [2024-07-22 18:38:14.797912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.937 ms 00:31:03.009 [2024-07-22 18:38:14.797926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.009 [2024-07-22 18:38:14.811513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.009 [2024-07-22 18:38:14.811580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:03.009 [2024-07-22 18:38:14.811601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.811 ms 00:31:03.009 [2024-07-22 18:38:14.811615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.009 [2024-07-22 18:38:14.891426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.009 [2024-07-22 18:38:14.891506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:03.009 [2024-07-22 18:38:14.891530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 79.680 ms 00:31:03.009 [2024-07-22 18:38:14.891544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.009 [2024-07-22 18:38:14.891835] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:03.009 [2024-07-22 18:38:14.891980] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:03.009 [2024-07-22 18:38:14.892118] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:03.009 [2024-07-22 18:38:14.892260] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:03.009 [2024-07-22 18:38:14.892275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.009 [2024-07-22 18:38:14.892288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:03.009 [2024-07-22 
18:38:14.892303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.642 ms 00:31:03.009 [2024-07-22 18:38:14.892316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.009 [2024-07-22 18:38:14.892433] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:03.009 [2024-07-22 18:38:14.892456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.009 [2024-07-22 18:38:14.892468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:03.009 [2024-07-22 18:38:14.892481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:31:03.009 [2024-07-22 18:38:14.892494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.009 [2024-07-22 18:38:14.913227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.009 [2024-07-22 18:38:14.913301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:03.009 [2024-07-22 18:38:14.913330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.696 ms 00:31:03.009 [2024-07-22 18:38:14.913344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.009 [2024-07-22 18:38:14.925763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.009 [2024-07-22 18:38:14.925828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:03.009 [2024-07-22 18:38:14.925849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:03.009 [2024-07-22 18:38:14.925862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.009 [2024-07-22 18:38:14.926207] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:03.578 [2024-07-22 18:38:15.430094] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:03.578 [2024-07-22 18:38:15.430284] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:04.145 [2024-07-22 18:38:15.932300] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:04.145 [2024-07-22 18:38:15.932440] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:04.145 [2024-07-22 18:38:15.932477] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:04.145 [2024-07-22 18:38:15.932495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.145 [2024-07-22 18:38:15.932509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:04.145 [2024-07-22 18:38:15.932527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1006.500 ms 00:31:04.145 [2024-07-22 18:38:15.932540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.145 [2024-07-22 18:38:15.932591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.145 [2024-07-22 18:38:15.932608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:04.145 [2024-07-22 18:38:15.932622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:04.145 [2024-07-22 18:38:15.932635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:31:04.145 [2024-07-22 18:38:15.947251] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:04.145 [2024-07-22 18:38:15.947472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.145 [2024-07-22 18:38:15.947498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:04.145 [2024-07-22 18:38:15.947516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.812 ms 00:31:04.145 [2024-07-22 18:38:15.947529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.145 [2024-07-22 18:38:15.948333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.145 [2024-07-22 18:38:15.948358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:04.145 [2024-07-22 18:38:15.948373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.637 ms 00:31:04.145 [2024-07-22 18:38:15.948386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.145 [2024-07-22 18:38:15.950822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.145 [2024-07-22 18:38:15.950856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:04.145 [2024-07-22 18:38:15.950871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.401 ms 00:31:04.145 [2024-07-22 18:38:15.950883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.145 [2024-07-22 18:38:15.950940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.145 [2024-07-22 18:38:15.950957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:04.145 [2024-07-22 18:38:15.950970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:04.145 [2024-07-22 18:38:15.950982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.145 [2024-07-22 18:38:15.951132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.145 [2024-07-22 18:38:15.951150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:04.145 [2024-07-22 18:38:15.951169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:04.145 [2024-07-22 18:38:15.951181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.145 [2024-07-22 18:38:15.951213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.145 [2024-07-22 18:38:15.951228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:04.145 [2024-07-22 18:38:15.951248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:04.145 [2024-07-22 18:38:15.951260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.145 [2024-07-22 18:38:15.951303] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:04.145 [2024-07-22 18:38:15.951321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.145 [2024-07-22 18:38:15.951334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:04.145 [2024-07-22 18:38:15.951346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:04.145 [2024-07-22 18:38:15.951363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.145 [2024-07-22 18:38:15.951441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.145 
[2024-07-22 18:38:15.951459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:04.145 [2024-07-22 18:38:15.951473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:31:04.145 [2024-07-22 18:38:15.951485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.145 [2024-07-22 18:38:15.953008] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1331.441 ms, result 0 00:31:04.145 [2024-07-22 18:38:15.968161] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.145 [2024-07-22 18:38:15.984178] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:04.146 [2024-07-22 18:38:15.993945] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:04.146 Validate MD5 checksum, iteration 1 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:04.146 18:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:04.146 [2024-07-22 18:38:16.129792] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
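tcp_dd, expanded at ftl/common.sh@198-199 every time it runs in this log, is a thin wrapper: it verifies the initiator config exists (tcp_initiator_setup, @151-154), then drives spdk_dd against the NVMe/TCP-attached ftln1. Its shape, reconstructed from the expansions in the trace rather than quoted from common.sh:

    tcp_dd() {
        # @151-154: checks ini.json exists, returns 0
        tcp_initiator_setup
        # @199: spdk_dd pinned to core 1, with its own RPC socket and the
        # initiator JSON config; caller supplies --ib/--of/--bs/--count/--qd/--skip
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
    }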
00:31:04.146 [2024-07-22 18:38:16.130210] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86866 ] 00:31:04.404 [2024-07-22 18:38:16.304442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.662 [2024-07-22 18:38:16.587958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.395  Copying: 521/1024 [MB] (521 MBps) Copying: 975/1024 [MB] (454 MBps) Copying: 1024/1024 [MB] (average 480 MBps) 00:31:09.395 00:31:09.395 18:38:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:09.395 18:38:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:11.294 18:38:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:11.294 Validate MD5 checksum, iteration 2 00:31:11.294 18:38:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d3425bdcadf14c2bccef9fed01c36664 00:31:11.294 18:38:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d3425bdcadf14c2bccef9fed01c36664 != \d\3\4\2\5\b\d\c\a\d\f\1\4\c\2\b\c\c\e\f\9\f\e\d\0\1\c\3\6\6\6\4 ]] 00:31:11.294 18:38:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:11.294 18:38:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:11.294 18:38:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:11.294 18:38:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:11.294 18:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:11.294 18:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:11.294 18:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:11.294 18:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:11.294 18:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:11.552 [2024-07-22 18:38:23.329495] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:11.552 [2024-07-22 18:38:23.329697] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86939 ] 00:31:11.552 [2024-07-22 18:38:23.502428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.810 [2024-07-22 18:38:23.790151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.043  Copying: 512/1024 [MB] (512 MBps) Copying: 987/1024 [MB] (475 MBps) Copying: 1024/1024 [MB] (average 493 MBps) 00:31:16.043 00:31:16.043 18:38:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:16.043 18:38:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=bcb93bf20cefad88d6613c875f9cf872 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ bcb93bf20cefad88d6613c875f9cf872 != \b\c\b\9\3\b\f\2\0\c\e\f\a\d\8\8\d\6\6\1\3\c\8\7\5\f\9\c\f\8\7\2 ]] 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86831 ]] 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86831 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86831 ']' 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86831 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86831 00:31:18.576 killing process with pid 86831 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86831' 00:31:18.576 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86831 00:31:18.576 18:38:30 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86831 00:31:19.513 [2024-07-22 18:38:31.353469] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:19.513 [2024-07-22 18:38:31.373401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.513 [2024-07-22 18:38:31.373485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:19.513 [2024-07-22 18:38:31.373522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:19.513 [2024-07-22 18:38:31.373537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.513 [2024-07-22 18:38:31.373570] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:19.513 [2024-07-22 18:38:31.377443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.513 [2024-07-22 18:38:31.377490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:19.513 [2024-07-22 18:38:31.377528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.849 ms 00:31:19.513 [2024-07-22 18:38:31.377540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.513 [2024-07-22 18:38:31.377842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.513 [2024-07-22 18:38:31.377863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:19.513 [2024-07-22 18:38:31.377878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:31:19.513 [2024-07-22 18:38:31.377890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.513 [2024-07-22 18:38:31.379209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.513 [2024-07-22 18:38:31.379251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:19.514 [2024-07-22 18:38:31.379269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.297 ms 00:31:19.514 [2024-07-22 18:38:31.379281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.380550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.514 [2024-07-22 18:38:31.380585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:19.514 [2024-07-22 18:38:31.380601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.220 ms 00:31:19.514 [2024-07-22 18:38:31.380614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.393934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.514 [2024-07-22 18:38:31.393978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:19.514 [2024-07-22 18:38:31.393997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.268 ms 00:31:19.514 [2024-07-22 18:38:31.394017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.401135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.514 [2024-07-22 18:38:31.401177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:19.514 [2024-07-22 18:38:31.401195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.073 ms 00:31:19.514 [2024-07-22 18:38:31.401208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.401340] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.514 [2024-07-22 18:38:31.401362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:19.514 [2024-07-22 18:38:31.401376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:31:19.514 [2024-07-22 18:38:31.401396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.413557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.514 [2024-07-22 18:38:31.413626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:31:19.514 [2024-07-22 18:38:31.413643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.134 ms 00:31:19.514 [2024-07-22 18:38:31.413655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.425845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.514 [2024-07-22 18:38:31.425899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:31:19.514 [2024-07-22 18:38:31.425917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.132 ms 00:31:19.514 [2024-07-22 18:38:31.425929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.438289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.514 [2024-07-22 18:38:31.438340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:19.514 [2024-07-22 18:38:31.438374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.312 ms 00:31:19.514 [2024-07-22 18:38:31.438386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.450676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.514 [2024-07-22 18:38:31.450732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:19.514 [2024-07-22 18:38:31.450751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.165 ms 00:31:19.514 [2024-07-22 18:38:31.450764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.450807] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:19.514 [2024-07-22 18:38:31.450833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:19.514 [2024-07-22 18:38:31.450850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:19.514 [2024-07-22 18:38:31.450863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:19.514 [2024-07-22 18:38:31.450876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.450889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.450902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.450914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.450927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.450939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.450952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.450965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.450992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.451020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.451033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.451045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.451058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.451070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.451102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:19.514 [2024-07-22 18:38:31.451118] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:19.514 [2024-07-22 18:38:31.451132] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 377feee2-e45f-4f9c-9eef-e0241540f3df 00:31:19.514 [2024-07-22 18:38:31.451145] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:19.514 [2024-07-22 18:38:31.451157] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:19.514 [2024-07-22 18:38:31.451169] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:19.514 [2024-07-22 18:38:31.451181] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:19.514 [2024-07-22 18:38:31.451193] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:19.514 [2024-07-22 18:38:31.451205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:19.514 [2024-07-22 18:38:31.451217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:19.514 [2024-07-22 18:38:31.451228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:19.514 [2024-07-22 18:38:31.451238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:19.514 [2024-07-22 18:38:31.451250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.514 [2024-07-22 18:38:31.451270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:19.514 [2024-07-22 18:38:31.451283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.446 ms 00:31:19.514 [2024-07-22 18:38:31.451295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.468353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.514 [2024-07-22 18:38:31.468393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:19.514 [2024-07-22 18:38:31.468412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.012 ms 00:31:19.514 [2024-07-22 18:38:31.468425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.468925] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:19.514 [2024-07-22 18:38:31.468945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:19.514 [2024-07-22 18:38:31.468959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.465 ms 00:31:19.514 [2024-07-22 18:38:31.468971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.524132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.514 [2024-07-22 18:38:31.524228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:19.514 [2024-07-22 18:38:31.524249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.514 [2024-07-22 18:38:31.524262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.524335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.514 [2024-07-22 18:38:31.524352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:19.514 [2024-07-22 18:38:31.524367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.514 [2024-07-22 18:38:31.524379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.524510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.514 [2024-07-22 18:38:31.524532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:19.514 [2024-07-22 18:38:31.524546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.514 [2024-07-22 18:38:31.524558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.514 [2024-07-22 18:38:31.524585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.514 [2024-07-22 18:38:31.524615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:19.514 [2024-07-22 18:38:31.524628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.514 [2024-07-22 18:38:31.524640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.773 [2024-07-22 18:38:31.631777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.773 [2024-07-22 18:38:31.631847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:19.773 [2024-07-22 18:38:31.631867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.773 [2024-07-22 18:38:31.631880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.773 [2024-07-22 18:38:31.722377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.774 [2024-07-22 18:38:31.722448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:19.774 [2024-07-22 18:38:31.722486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.774 [2024-07-22 18:38:31.722498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.774 [2024-07-22 18:38:31.722637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.774 [2024-07-22 18:38:31.722658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:19.774 [2024-07-22 18:38:31.722672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.774 [2024-07-22 18:38:31.722685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.774 [2024-07-22 
18:38:31.722787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.774 [2024-07-22 18:38:31.722808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:19.774 [2024-07-22 18:38:31.722831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.774 [2024-07-22 18:38:31.722844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.774 [2024-07-22 18:38:31.722985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.774 [2024-07-22 18:38:31.723005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:19.774 [2024-07-22 18:38:31.723019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.774 [2024-07-22 18:38:31.723031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.774 [2024-07-22 18:38:31.723095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.774 [2024-07-22 18:38:31.723114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:19.774 [2024-07-22 18:38:31.723142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.774 [2024-07-22 18:38:31.723176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.774 [2024-07-22 18:38:31.723225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.774 [2024-07-22 18:38:31.723258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:19.774 [2024-07-22 18:38:31.723288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.774 [2024-07-22 18:38:31.723300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.774 [2024-07-22 18:38:31.723376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.774 [2024-07-22 18:38:31.723430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:19.774 [2024-07-22 18:38:31.723451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.774 [2024-07-22 18:38:31.723463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.774 [2024-07-22 18:38:31.723618] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 350.176 ms, result 0 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:21.150 Remove shared memory files 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid86604 
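The xtrace records above walk upgrade_shutdown.sh through its two checksum passes before tearing the target down. As a hedged, condensed reconstruction of that loop (paths shortened; iterations, testdir, and the md5 reference array are illustrative names inferred from the trace at upgrade_shutdown.sh@96-105, not verbatim from the script):

    iterations=2
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd wraps spdk_dd: read 1024 x 1 MiB blocks from bdev ftln1 over NVMe/TCP,
        # advancing by 1024 blocks per pass (skip=0, then skip=1024)
        "$rootdir/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$testdir/config/ini.json" \
            --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        # md5[i] is assumed to hold the reference checksums taken earlier in the test;
        # both passes matched here (d3425... and bcb93...), so data survived the
        # upgrade/shutdown cycle. A bare failing [[ ]] aborts the test under set -e.
        [[ $sum == "${md5[i]}" ]]
    done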
00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:21.150 ************************************ 00:31:21.150 END TEST ftl_upgrade_shutdown 00:31:21.150 ************************************ 00:31:21.150 00:31:21.150 real 1m35.935s 00:31:21.150 user 2m18.298s 00:31:21.150 sys 0m23.878s 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:21.150 18:38:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:21.150 18:38:33 ftl -- common/autotest_common.sh@1142 -- # return 0 00:31:21.150 18:38:33 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:21.150 18:38:33 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:21.150 18:38:33 ftl -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:31:21.150 18:38:33 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:21.150 18:38:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:21.150 ************************************ 00:31:21.150 START TEST ftl_restore_fast 00:31:21.150 ************************************ 00:31:21.150 18:38:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:21.490 * Looking for test storage... 00:31:21.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:21.490 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:21.490 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:21.490 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:21.490 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.H76gOwIcvf 00:31:21.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=87108 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 87108 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- common/autotest_common.sh@829 -- # '[' -z 87108 ']' 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:21.491 18:38:33 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:21.491 [2024-07-22 18:38:33.357585] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:21.491 [2024-07-22 18:38:33.357806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87108 ] 00:31:21.760 [2024-07-22 18:38:33.533834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.019 [2024-07-22 18:38:33.777665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.586 18:38:34 ftl.ftl_restore_fast -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:22.586 18:38:34 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # return 0 00:31:22.586 18:38:34 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:22.586 18:38:34 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:22.586 18:38:34 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:22.586 18:38:34 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:22.586 18:38:34 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:22.586 18:38:34 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:23.153 18:38:34 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:23.153 18:38:34 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:23.153 18:38:34 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:23.153 18:38:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:31:23.153 18:38:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:23.153 18:38:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:31:23.153 18:38:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:31:23.153 18:38:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:23.411 18:38:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:23.411 { 00:31:23.411 "name": "nvme0n1", 00:31:23.411 "aliases": [ 00:31:23.411 "c1aafb00-fcfc-47ae-90ee-6e8dd2c64717" 00:31:23.411 ], 00:31:23.411 "product_name": "NVMe disk", 00:31:23.411 "block_size": 4096, 00:31:23.411 "num_blocks": 1310720, 00:31:23.411 "uuid": "c1aafb00-fcfc-47ae-90ee-6e8dd2c64717", 00:31:23.411 "assigned_rate_limits": { 00:31:23.411 "rw_ios_per_sec": 0, 00:31:23.411 "rw_mbytes_per_sec": 0, 00:31:23.411 "r_mbytes_per_sec": 0, 00:31:23.411 "w_mbytes_per_sec": 0 00:31:23.411 }, 00:31:23.411 "claimed": true, 00:31:23.411 "claim_type": "read_many_write_one", 00:31:23.411 "zoned": false, 00:31:23.411 "supported_io_types": { 00:31:23.411 "read": true, 00:31:23.411 "write": true, 00:31:23.411 "unmap": true, 00:31:23.411 "flush": true, 00:31:23.411 "reset": true, 00:31:23.411 "nvme_admin": true, 00:31:23.411 "nvme_io": true, 00:31:23.411 "nvme_io_md": false, 00:31:23.411 "write_zeroes": true, 00:31:23.411 "zcopy": false, 00:31:23.411 "get_zone_info": false, 00:31:23.411 "zone_management": false, 00:31:23.411 "zone_append": false, 00:31:23.411 "compare": true, 00:31:23.411 "compare_and_write": false, 00:31:23.411 "abort": true, 00:31:23.411 "seek_hole": false, 00:31:23.411 "seek_data": false, 00:31:23.411 "copy": true, 00:31:23.411 "nvme_iov_md": false 00:31:23.411 }, 
00:31:23.411 "driver_specific": { 00:31:23.411 "nvme": [ 00:31:23.411 { 00:31:23.411 "pci_address": "0000:00:11.0", 00:31:23.411 "trid": { 00:31:23.411 "trtype": "PCIe", 00:31:23.411 "traddr": "0000:00:11.0" 00:31:23.411 }, 00:31:23.411 "ctrlr_data": { 00:31:23.411 "cntlid": 0, 00:31:23.411 "vendor_id": "0x1b36", 00:31:23.411 "model_number": "QEMU NVMe Ctrl", 00:31:23.411 "serial_number": "12341", 00:31:23.411 "firmware_revision": "8.0.0", 00:31:23.411 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:23.411 "oacs": { 00:31:23.411 "security": 0, 00:31:23.411 "format": 1, 00:31:23.411 "firmware": 0, 00:31:23.411 "ns_manage": 1 00:31:23.411 }, 00:31:23.411 "multi_ctrlr": false, 00:31:23.411 "ana_reporting": false 00:31:23.411 }, 00:31:23.411 "vs": { 00:31:23.411 "nvme_version": "1.4" 00:31:23.411 }, 00:31:23.411 "ns_data": { 00:31:23.411 "id": 1, 00:31:23.411 "can_share": false 00:31:23.411 } 00:31:23.411 } 00:31:23.411 ], 00:31:23.411 "mp_policy": "active_passive" 00:31:23.411 } 00:31:23.412 } 00:31:23.412 ]' 00:31:23.412 18:38:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:23.412 18:38:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:31:23.412 18:38:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:23.412 18:38:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:31:23.412 18:38:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:31:23.412 18:38:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:31:23.412 18:38:35 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:23.412 18:38:35 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:23.412 18:38:35 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:23.412 18:38:35 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:23.412 18:38:35 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:23.670 18:38:35 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=524944ab-6629-4540-b75e-a76eaf7a2988 00:31:23.670 18:38:35 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:23.670 18:38:35 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 524944ab-6629-4540-b75e-a76eaf7a2988 00:31:23.928 18:38:35 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:24.187 18:38:36 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=4f919917-383c-4ad1-b38e-df03a2aad0ae 00:31:24.187 18:38:36 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4f919917-383c-4ad1-b38e-df03a2aad0ae 00:31:24.446 18:38:36 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=8a2afdb2-bb00-4dde-941e-2295fdfe6cae 00:31:24.446 18:38:36 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:24.446 18:38:36 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8a2afdb2-bb00-4dde-941e-2295fdfe6cae 00:31:24.446 18:38:36 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:24.446 18:38:36 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:24.446 18:38:36 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=8a2afdb2-bb00-4dde-941e-2295fdfe6cae 
00:31:24.446 18:38:36 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:24.446 18:38:36 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 8a2afdb2-bb00-4dde-941e-2295fdfe6cae 00:31:24.446 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=8a2afdb2-bb00-4dde-941e-2295fdfe6cae 00:31:24.446 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:24.446 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:31:24.446 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:31:24.446 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a2afdb2-bb00-4dde-941e-2295fdfe6cae 00:31:24.705 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:24.705 { 00:31:24.705 "name": "8a2afdb2-bb00-4dde-941e-2295fdfe6cae", 00:31:24.705 "aliases": [ 00:31:24.705 "lvs/nvme0n1p0" 00:31:24.705 ], 00:31:24.705 "product_name": "Logical Volume", 00:31:24.705 "block_size": 4096, 00:31:24.705 "num_blocks": 26476544, 00:31:24.705 "uuid": "8a2afdb2-bb00-4dde-941e-2295fdfe6cae", 00:31:24.705 "assigned_rate_limits": { 00:31:24.705 "rw_ios_per_sec": 0, 00:31:24.705 "rw_mbytes_per_sec": 0, 00:31:24.705 "r_mbytes_per_sec": 0, 00:31:24.705 "w_mbytes_per_sec": 0 00:31:24.705 }, 00:31:24.705 "claimed": false, 00:31:24.705 "zoned": false, 00:31:24.705 "supported_io_types": { 00:31:24.705 "read": true, 00:31:24.705 "write": true, 00:31:24.705 "unmap": true, 00:31:24.705 "flush": false, 00:31:24.705 "reset": true, 00:31:24.705 "nvme_admin": false, 00:31:24.705 "nvme_io": false, 00:31:24.705 "nvme_io_md": false, 00:31:24.705 "write_zeroes": true, 00:31:24.705 "zcopy": false, 00:31:24.705 "get_zone_info": false, 00:31:24.705 "zone_management": false, 00:31:24.705 "zone_append": false, 00:31:24.705 "compare": false, 00:31:24.705 "compare_and_write": false, 00:31:24.705 "abort": false, 00:31:24.705 "seek_hole": true, 00:31:24.705 "seek_data": true, 00:31:24.705 "copy": false, 00:31:24.705 "nvme_iov_md": false 00:31:24.705 }, 00:31:24.705 "driver_specific": { 00:31:24.705 "lvol": { 00:31:24.705 "lvol_store_uuid": "4f919917-383c-4ad1-b38e-df03a2aad0ae", 00:31:24.705 "base_bdev": "nvme0n1", 00:31:24.705 "thin_provision": true, 00:31:24.705 "num_allocated_clusters": 0, 00:31:24.705 "snapshot": false, 00:31:24.705 "clone": false, 00:31:24.705 "esnap_clone": false 00:31:24.705 } 00:31:24.705 } 00:31:24.705 } 00:31:24.705 ]' 00:31:24.705 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:24.705 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:31:24.705 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:24.705 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:24.705 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:24.705 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:31:24.705 18:38:36 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:24.705 18:38:36 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:24.705 18:38:36 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:25.274 18:38:36 ftl.ftl_restore_fast -- 
ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:25.274 18:38:36 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:25.274 18:38:36 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 8a2afdb2-bb00-4dde-941e-2295fdfe6cae 00:31:25.274 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=8a2afdb2-bb00-4dde-941e-2295fdfe6cae 00:31:25.274 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:25.274 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:31:25.274 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:31:25.274 18:38:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a2afdb2-bb00-4dde-941e-2295fdfe6cae 00:31:25.274 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:25.274 { 00:31:25.274 "name": "8a2afdb2-bb00-4dde-941e-2295fdfe6cae", 00:31:25.274 "aliases": [ 00:31:25.274 "lvs/nvme0n1p0" 00:31:25.274 ], 00:31:25.274 "product_name": "Logical Volume", 00:31:25.274 "block_size": 4096, 00:31:25.274 "num_blocks": 26476544, 00:31:25.274 "uuid": "8a2afdb2-bb00-4dde-941e-2295fdfe6cae", 00:31:25.274 "assigned_rate_limits": { 00:31:25.274 "rw_ios_per_sec": 0, 00:31:25.274 "rw_mbytes_per_sec": 0, 00:31:25.274 "r_mbytes_per_sec": 0, 00:31:25.274 "w_mbytes_per_sec": 0 00:31:25.274 }, 00:31:25.274 "claimed": false, 00:31:25.274 "zoned": false, 00:31:25.274 "supported_io_types": { 00:31:25.274 "read": true, 00:31:25.274 "write": true, 00:31:25.274 "unmap": true, 00:31:25.274 "flush": false, 00:31:25.274 "reset": true, 00:31:25.274 "nvme_admin": false, 00:31:25.274 "nvme_io": false, 00:31:25.274 "nvme_io_md": false, 00:31:25.274 "write_zeroes": true, 00:31:25.274 "zcopy": false, 00:31:25.274 "get_zone_info": false, 00:31:25.274 "zone_management": false, 00:31:25.274 "zone_append": false, 00:31:25.274 "compare": false, 00:31:25.274 "compare_and_write": false, 00:31:25.274 "abort": false, 00:31:25.274 "seek_hole": true, 00:31:25.274 "seek_data": true, 00:31:25.274 "copy": false, 00:31:25.274 "nvme_iov_md": false 00:31:25.274 }, 00:31:25.274 "driver_specific": { 00:31:25.274 "lvol": { 00:31:25.274 "lvol_store_uuid": "4f919917-383c-4ad1-b38e-df03a2aad0ae", 00:31:25.274 "base_bdev": "nvme0n1", 00:31:25.274 "thin_provision": true, 00:31:25.274 "num_allocated_clusters": 0, 00:31:25.274 "snapshot": false, 00:31:25.274 "clone": false, 00:31:25.274 "esnap_clone": false 00:31:25.274 } 00:31:25.274 } 00:31:25.274 } 00:31:25.274 ]' 00:31:25.274 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:25.533 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:31:25.533 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:25.533 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:25.533 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:25.533 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:31:25.533 18:38:37 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:25.533 18:38:37 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:25.792 18:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:25.792 18:38:37 ftl.ftl_restore_fast -- 
ftl/restore.sh@48 -- # get_bdev_size 8a2afdb2-bb00-4dde-941e-2295fdfe6cae 00:31:25.792 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=8a2afdb2-bb00-4dde-941e-2295fdfe6cae 00:31:25.792 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:25.792 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:31:25.792 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:31:25.792 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a2afdb2-bb00-4dde-941e-2295fdfe6cae 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:26.051 { 00:31:26.051 "name": "8a2afdb2-bb00-4dde-941e-2295fdfe6cae", 00:31:26.051 "aliases": [ 00:31:26.051 "lvs/nvme0n1p0" 00:31:26.051 ], 00:31:26.051 "product_name": "Logical Volume", 00:31:26.051 "block_size": 4096, 00:31:26.051 "num_blocks": 26476544, 00:31:26.051 "uuid": "8a2afdb2-bb00-4dde-941e-2295fdfe6cae", 00:31:26.051 "assigned_rate_limits": { 00:31:26.051 "rw_ios_per_sec": 0, 00:31:26.051 "rw_mbytes_per_sec": 0, 00:31:26.051 "r_mbytes_per_sec": 0, 00:31:26.051 "w_mbytes_per_sec": 0 00:31:26.051 }, 00:31:26.051 "claimed": false, 00:31:26.051 "zoned": false, 00:31:26.051 "supported_io_types": { 00:31:26.051 "read": true, 00:31:26.051 "write": true, 00:31:26.051 "unmap": true, 00:31:26.051 "flush": false, 00:31:26.051 "reset": true, 00:31:26.051 "nvme_admin": false, 00:31:26.051 "nvme_io": false, 00:31:26.051 "nvme_io_md": false, 00:31:26.051 "write_zeroes": true, 00:31:26.051 "zcopy": false, 00:31:26.051 "get_zone_info": false, 00:31:26.051 "zone_management": false, 00:31:26.051 "zone_append": false, 00:31:26.051 "compare": false, 00:31:26.051 "compare_and_write": false, 00:31:26.051 "abort": false, 00:31:26.051 "seek_hole": true, 00:31:26.051 "seek_data": true, 00:31:26.051 "copy": false, 00:31:26.051 "nvme_iov_md": false 00:31:26.051 }, 00:31:26.051 "driver_specific": { 00:31:26.051 "lvol": { 00:31:26.051 "lvol_store_uuid": "4f919917-383c-4ad1-b38e-df03a2aad0ae", 00:31:26.051 "base_bdev": "nvme0n1", 00:31:26.051 "thin_provision": true, 00:31:26.051 "num_allocated_clusters": 0, 00:31:26.051 "snapshot": false, 00:31:26.051 "clone": false, 00:31:26.051 "esnap_clone": false 00:31:26.051 } 00:31:26.051 } 00:31:26.051 } 00:31:26.051 ]' 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8a2afdb2-bb00-4dde-941e-2295fdfe6cae --l2p_dram_limit 10' 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' 
-c nvc0n1p0' 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:26.051 18:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8a2afdb2-bb00-4dde-941e-2295fdfe6cae --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:26.312 [2024-07-22 18:38:38.160778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.312 [2024-07-22 18:38:38.161177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:26.312 [2024-07-22 18:38:38.161350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:26.312 [2024-07-22 18:38:38.161421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.312 [2024-07-22 18:38:38.161714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.312 [2024-07-22 18:38:38.161751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:26.312 [2024-07-22 18:38:38.161771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:31:26.312 [2024-07-22 18:38:38.161791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.312 [2024-07-22 18:38:38.161832] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:26.312 [2024-07-22 18:38:38.162948] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:26.312 [2024-07-22 18:38:38.162996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.312 [2024-07-22 18:38:38.163025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:26.312 [2024-07-22 18:38:38.163044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.174 ms 00:31:26.312 [2024-07-22 18:38:38.163062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.312 [2024-07-22 18:38:38.163234] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b92e00d0-5341-470f-9738-7edc0d43ccba 00:31:26.312 [2024-07-22 18:38:38.165721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.312 [2024-07-22 18:38:38.165767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:26.312 [2024-07-22 18:38:38.165794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:26.312 [2024-07-22 18:38:38.165811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.312 [2024-07-22 18:38:38.179157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.312 [2024-07-22 18:38:38.179248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:26.312 [2024-07-22 18:38:38.179280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.211 ms 00:31:26.312 [2024-07-22 18:38:38.179296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.312 [2024-07-22 18:38:38.179521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.312 [2024-07-22 18:38:38.179555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:26.312 [2024-07-22 18:38:38.179578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:31:26.312 [2024-07-22 18:38:38.179595] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.312 [2024-07-22 18:38:38.179777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.312 [2024-07-22 18:38:38.179819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:26.312 [2024-07-22 18:38:38.179843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:26.312 [2024-07-22 18:38:38.179866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.312 [2024-07-22 18:38:38.179920] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:26.312 [2024-07-22 18:38:38.185863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.312 [2024-07-22 18:38:38.185920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:26.312 [2024-07-22 18:38:38.185942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.964 ms 00:31:26.312 [2024-07-22 18:38:38.185962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.312 [2024-07-22 18:38:38.186026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.312 [2024-07-22 18:38:38.186053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:26.312 [2024-07-22 18:38:38.186072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:26.312 [2024-07-22 18:38:38.186091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.312 [2024-07-22 18:38:38.186156] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:26.312 [2024-07-22 18:38:38.186347] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:26.312 [2024-07-22 18:38:38.186372] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:26.312 [2024-07-22 18:38:38.186401] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:26.312 [2024-07-22 18:38:38.186422] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:26.312 [2024-07-22 18:38:38.186444] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:26.312 [2024-07-22 18:38:38.186461] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:26.312 [2024-07-22 18:38:38.186479] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:26.312 [2024-07-22 18:38:38.186498] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:26.312 [2024-07-22 18:38:38.186519] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:26.312 [2024-07-22 18:38:38.186536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.312 [2024-07-22 18:38:38.186555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:26.312 [2024-07-22 18:38:38.186573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:31:26.312 [2024-07-22 18:38:38.186592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.312 [2024-07-22 18:38:38.186714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.312 [2024-07-22 18:38:38.186743] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Verify layout 00:31:26.312 [2024-07-22 18:38:38.186761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:31:26.312 [2024-07-22 18:38:38.186780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.312 [2024-07-22 18:38:38.186919] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:26.312 [2024-07-22 18:38:38.186950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:26.312 [2024-07-22 18:38:38.186987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:26.312 [2024-07-22 18:38:38.187009] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:26.312 [2024-07-22 18:38:38.187027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:26.312 [2024-07-22 18:38:38.187045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:26.312 [2024-07-22 18:38:38.187060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:26.312 [2024-07-22 18:38:38.187079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:26.312 [2024-07-22 18:38:38.187094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:26.312 [2024-07-22 18:38:38.187112] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:26.312 [2024-07-22 18:38:38.187127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:26.312 [2024-07-22 18:38:38.187147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:26.312 [2024-07-22 18:38:38.187162] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:26.312 [2024-07-22 18:38:38.187182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:26.312 [2024-07-22 18:38:38.187197] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:26.312 [2024-07-22 18:38:38.187215] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:26.312 [2024-07-22 18:38:38.187230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:26.312 [2024-07-22 18:38:38.187252] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:26.312 [2024-07-22 18:38:38.187267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:26.312 [2024-07-22 18:38:38.187285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:26.312 [2024-07-22 18:38:38.187300] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:26.312 [2024-07-22 18:38:38.187318] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:26.312 [2024-07-22 18:38:38.187332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:26.312 [2024-07-22 18:38:38.187350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:26.312 [2024-07-22 18:38:38.187365] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:26.312 [2024-07-22 18:38:38.187397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:26.312 [2024-07-22 18:38:38.187417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:26.312 [2024-07-22 18:38:38.187437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:26.312 [2024-07-22 18:38:38.187462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:26.313 [2024-07-22 18:38:38.187480] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:26.313 [2024-07-22 18:38:38.187495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:26.313 [2024-07-22 18:38:38.187513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:26.313 [2024-07-22 18:38:38.187528] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:26.313 [2024-07-22 18:38:38.187549] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:26.313 [2024-07-22 18:38:38.187564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:26.313 [2024-07-22 18:38:38.187582] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:26.313 [2024-07-22 18:38:38.187597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:26.313 [2024-07-22 18:38:38.187614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:26.313 [2024-07-22 18:38:38.187630] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:26.313 [2024-07-22 18:38:38.187650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:26.313 [2024-07-22 18:38:38.187664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:26.313 [2024-07-22 18:38:38.187699] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:26.313 [2024-07-22 18:38:38.187719] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:26.313 [2024-07-22 18:38:38.187739] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:26.313 [2024-07-22 18:38:38.187756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:26.313 [2024-07-22 18:38:38.187788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:26.313 [2024-07-22 18:38:38.187812] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:26.313 [2024-07-22 18:38:38.187832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:26.313 [2024-07-22 18:38:38.187847] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:26.313 [2024-07-22 18:38:38.187868] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:26.313 [2024-07-22 18:38:38.187883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:26.313 [2024-07-22 18:38:38.187901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:26.313 [2024-07-22 18:38:38.187916] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:26.313 [2024-07-22 18:38:38.187939] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:26.313 [2024-07-22 18:38:38.187958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:26.313 [2024-07-22 18:38:38.187983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:26.313 [2024-07-22 18:38:38.187999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:26.313 [2024-07-22 18:38:38.188017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:26.313 [2024-07-22 18:38:38.188033] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:26.313 [2024-07-22 18:38:38.188051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:26.313 [2024-07-22 18:38:38.188066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:26.313 [2024-07-22 18:38:38.188084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:26.313 [2024-07-22 18:38:38.188099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:26.313 [2024-07-22 18:38:38.188118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:26.313 [2024-07-22 18:38:38.188134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:26.313 [2024-07-22 18:38:38.188156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:26.313 [2024-07-22 18:38:38.188171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:26.313 [2024-07-22 18:38:38.188190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:26.313 [2024-07-22 18:38:38.188206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:26.313 [2024-07-22 18:38:38.188225] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:26.313 [2024-07-22 18:38:38.188242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:26.313 [2024-07-22 18:38:38.188262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:26.313 [2024-07-22 18:38:38.188278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:26.313 [2024-07-22 18:38:38.188297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:26.313 [2024-07-22 18:38:38.188312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:26.313 [2024-07-22 18:38:38.188335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.313 [2024-07-22 18:38:38.188351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:26.313 [2024-07-22 18:38:38.188371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.478 ms 00:31:26.313 [2024-07-22 18:38:38.188386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.313 [2024-07-22 18:38:38.188462] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
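The layout dump above is internally consistent and can be cross-checked by hand. Below is a minimal sketch, assuming the FTL block size is 4 KiB (every blk_offs/blk_sz pair in the superblock dump matches the MiB figures under that assumption); the names in the snippet are illustrative, and only the numeric values come from the log, including the L2P entry count and address size reported later in this run:

    # Sketch: cross-check the FTL layout dump, assuming 4 KiB FTL blocks.
    # Names here are illustrative; only the numbers come from the log above.
    FTL_BLOCK_SIZE = 4096  # assumed block size, consistent with the dump

    def blocks_to_mib(nblocks: int) -> float:
        """Convert a block count from the superblock dump to MiB."""
        return nblocks * FTL_BLOCK_SIZE / (1024 * 1024)

    # Region type:0x2 (l2p): blk_sz 0x5000 -> matches "blocks: 80.00 MiB"
    assert blocks_to_mib(0x5000) == 80.0

    # Region type:0x9 (data): blk_sz 0x1900000 -> matches
    # "data_btm blocks: 102400.00 MiB" (a 100 GiB base data region)
    assert blocks_to_mib(0x1900000) == 102400.0

    # L2P table sizing: 20971520 entries * 4-byte addresses = 80 MiB,
    # matching the l2p region size above.
    assert 20971520 * 4 / (1024 * 1024) == 80.0

The same arithmetic explains the offsets: the l2p region starts at blk_offs 0x20, i.e. 32 blocks * 4096 B = 0.125 MiB, printed above as "offset: 0.12 MiB".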
00:31:26.313 [2024-07-22 18:38:38.188492] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:28.845 [2024-07-22 18:38:40.561357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.561479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:28.845 [2024-07-22 18:38:40.561512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2372.875 ms 00:31:28.845 [2024-07-22 18:38:40.561529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.845 [2024-07-22 18:38:40.606312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.606403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:28.845 [2024-07-22 18:38:40.606435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.441 ms 00:31:28.845 [2024-07-22 18:38:40.606452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.845 [2024-07-22 18:38:40.606718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.606746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:28.845 [2024-07-22 18:38:40.606770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:31:28.845 [2024-07-22 18:38:40.606792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.845 [2024-07-22 18:38:40.654467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.654552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:28.845 [2024-07-22 18:38:40.654582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.595 ms 00:31:28.845 [2024-07-22 18:38:40.654599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.845 [2024-07-22 18:38:40.654704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.654739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:28.845 [2024-07-22 18:38:40.654761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:28.845 [2024-07-22 18:38:40.654776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.845 [2024-07-22 18:38:40.655645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.655699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:28.845 [2024-07-22 18:38:40.655727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:31:28.845 [2024-07-22 18:38:40.655743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.845 [2024-07-22 18:38:40.655939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.655985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:28.845 [2024-07-22 18:38:40.656012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:31:28.845 [2024-07-22 18:38:40.656027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.845 [2024-07-22 18:38:40.679658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.679760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:28.845 [2024-07-22 
18:38:40.679791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.588 ms 00:31:28.845 [2024-07-22 18:38:40.679808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.845 [2024-07-22 18:38:40.695722] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:28.845 [2024-07-22 18:38:40.701062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.701110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:28.845 [2024-07-22 18:38:40.701136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.074 ms 00:31:28.845 [2024-07-22 18:38:40.701156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.845 [2024-07-22 18:38:40.779274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.779414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:28.845 [2024-07-22 18:38:40.779446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.034 ms 00:31:28.845 [2024-07-22 18:38:40.779467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.845 [2024-07-22 18:38:40.779835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.779883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:28.845 [2024-07-22 18:38:40.779909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:31:28.845 [2024-07-22 18:38:40.779947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.845 [2024-07-22 18:38:40.814940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.815059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:28.845 [2024-07-22 18:38:40.815086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.853 ms 00:31:28.845 [2024-07-22 18:38:40.815107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.845 [2024-07-22 18:38:40.845750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.845817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:28.845 [2024-07-22 18:38:40.845842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.567 ms 00:31:28.845 [2024-07-22 18:38:40.845862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.845 [2024-07-22 18:38:40.846812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.845 [2024-07-22 18:38:40.846859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:28.845 [2024-07-22 18:38:40.846881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:31:28.845 [2024-07-22 18:38:40.846907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.104 [2024-07-22 18:38:40.945470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.104 [2024-07-22 18:38:40.945601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:29.104 [2024-07-22 18:38:40.945630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.480 ms 00:31:29.104 [2024-07-22 18:38:40.945657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.104 [2024-07-22 
18:38:40.979414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.104 [2024-07-22 18:38:40.979513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:29.104 [2024-07-22 18:38:40.979541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.667 ms 00:31:29.104 [2024-07-22 18:38:40.979562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.104 [2024-07-22 18:38:41.011186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.104 [2024-07-22 18:38:41.011269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:29.104 [2024-07-22 18:38:41.011294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.551 ms 00:31:29.104 [2024-07-22 18:38:41.011313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.104 [2024-07-22 18:38:41.043313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.104 [2024-07-22 18:38:41.043615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:29.104 [2024-07-22 18:38:41.043780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.932 ms 00:31:29.105 [2024-07-22 18:38:41.043818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.105 [2024-07-22 18:38:41.043911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.105 [2024-07-22 18:38:41.043944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:29.105 [2024-07-22 18:38:41.043972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:31:29.105 [2024-07-22 18:38:41.043996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.105 [2024-07-22 18:38:41.044146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.105 [2024-07-22 18:38:41.044177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:29.105 [2024-07-22 18:38:41.044199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:29.105 [2024-07-22 18:38:41.044219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.105 [2024-07-22 18:38:41.045858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2884.474 ms, result 0 00:31:29.105 { 00:31:29.105 "name": "ftl0", 00:31:29.105 "uuid": "b92e00d0-5341-470f-9738-7edc0d43ccba" 00:31:29.105 } 00:31:29.105 18:38:41 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:29.105 18:38:41 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:29.386 18:38:41 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:29.386 18:38:41 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:29.644 [2024-07-22 18:38:41.645057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.644 [2024-07-22 18:38:41.645149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:29.644 [2024-07-22 18:38:41.645177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:29.644 [2024-07-22 18:38:41.645192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.644 [2024-07-22 18:38:41.645238] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel 
destroy on app_thread 00:31:29.644 [2024-07-22 18:38:41.649123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.644 [2024-07-22 18:38:41.649167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:29.644 [2024-07-22 18:38:41.649186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.857 ms 00:31:29.644 [2024-07-22 18:38:41.649202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.644 [2024-07-22 18:38:41.649554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.644 [2024-07-22 18:38:41.649596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:29.644 [2024-07-22 18:38:41.649625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:31:29.644 [2024-07-22 18:38:41.649642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.644 [2024-07-22 18:38:41.652872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.644 [2024-07-22 18:38:41.652908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:29.644 [2024-07-22 18:38:41.652924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms 00:31:29.644 [2024-07-22 18:38:41.652939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.904 [2024-07-22 18:38:41.659587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.904 [2024-07-22 18:38:41.659631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:29.904 [2024-07-22 18:38:41.659650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.620 ms 00:31:29.904 [2024-07-22 18:38:41.659666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.904 [2024-07-22 18:38:41.691965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.904 [2024-07-22 18:38:41.692034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:29.904 [2024-07-22 18:38:41.692057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.155 ms 00:31:29.904 [2024-07-22 18:38:41.692080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.904 [2024-07-22 18:38:41.711077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.904 [2024-07-22 18:38:41.711151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:29.904 [2024-07-22 18:38:41.711174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.927 ms 00:31:29.904 [2024-07-22 18:38:41.711190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.904 [2024-07-22 18:38:41.711436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.904 [2024-07-22 18:38:41.711465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:29.904 [2024-07-22 18:38:41.711482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:31:29.904 [2024-07-22 18:38:41.711498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.904 [2024-07-22 18:38:41.743060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.904 [2024-07-22 18:38:41.743129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:31:29.904 [2024-07-22 18:38:41.743151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.532 ms 00:31:29.904 [2024-07-22 
18:38:41.743167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.904 [2024-07-22 18:38:41.773888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.904 [2024-07-22 18:38:41.773980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:31:29.904 [2024-07-22 18:38:41.774002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.638 ms 00:31:29.904 [2024-07-22 18:38:41.774018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.904 [2024-07-22 18:38:41.804783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.904 [2024-07-22 18:38:41.804849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:29.904 [2024-07-22 18:38:41.804870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.684 ms 00:31:29.904 [2024-07-22 18:38:41.804894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.904 [2024-07-22 18:38:41.835712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.904 [2024-07-22 18:38:41.835780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:29.904 [2024-07-22 18:38:41.835801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.660 ms 00:31:29.904 [2024-07-22 18:38:41.835817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.904 [2024-07-22 18:38:41.835891] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:29.904 [2024-07-22 18:38:41.835924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:29.904 [2024-07-22 18:38:41.835941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:29.904 [2024-07-22 18:38:41.835958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:29.904 [2024-07-22 18:38:41.835971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:29.904 [2024-07-22 18:38:41.835987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:29.904 [2024-07-22 18:38:41.836001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:29.904 [2024-07-22 18:38:41.836017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:31:29.905 [2024-07-22 18:38:41.836138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.836995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:29.905 [2024-07-22 18:38:41.837264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:29.906 [2024-07-22 18:38:41.837282] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:29.906 [2024-07-22 18:38:41.837296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:29.906 [2024-07-22 18:38:41.837312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:29.906 [2024-07-22 18:38:41.837325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:29.906 [2024-07-22 18:38:41.837341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:29.906 [2024-07-22 18:38:41.837354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:29.906 [2024-07-22 18:38:41.837370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:29.906 [2024-07-22 18:38:41.837384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:29.906 [2024-07-22 18:38:41.837400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:29.906 [2024-07-22 18:38:41.837414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:29.906 [2024-07-22 18:38:41.837431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:29.906 [2024-07-22 18:38:41.837445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:29.906 [2024-07-22 18:38:41.837471] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:29.906 [2024-07-22 18:38:41.837488] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b92e00d0-5341-470f-9738-7edc0d43ccba 00:31:29.906 [2024-07-22 18:38:41.837504] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:29.906 [2024-07-22 18:38:41.837517] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:29.906 [2024-07-22 18:38:41.837534] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:29.906 [2024-07-22 18:38:41.837548] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:29.906 [2024-07-22 18:38:41.837563] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:29.906 [2024-07-22 18:38:41.837576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:29.906 [2024-07-22 18:38:41.837591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:29.906 [2024-07-22 18:38:41.837603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:29.906 [2024-07-22 18:38:41.837617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:29.906 [2024-07-22 18:38:41.837629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.906 [2024-07-22 18:38:41.837644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:29.906 [2024-07-22 18:38:41.837659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.741 ms 00:31:29.906 [2024-07-22 18:38:41.837674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.906 [2024-07-22 18:38:41.854872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.906 [2024-07-22 18:38:41.854932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:31:29.906 [2024-07-22 18:38:41.854952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.090 ms 00:31:29.906 [2024-07-22 18:38:41.854968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.906 [2024-07-22 18:38:41.855473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.906 [2024-07-22 18:38:41.855512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:29.906 [2024-07-22 18:38:41.855528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:31:29.906 [2024-07-22 18:38:41.855550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.906 [2024-07-22 18:38:41.909075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.906 [2024-07-22 18:38:41.909139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:29.906 [2024-07-22 18:38:41.909160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.906 [2024-07-22 18:38:41.909176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.906 [2024-07-22 18:38:41.909276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.906 [2024-07-22 18:38:41.909299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:29.906 [2024-07-22 18:38:41.909314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.906 [2024-07-22 18:38:41.909333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.906 [2024-07-22 18:38:41.909469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.906 [2024-07-22 18:38:41.909497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:29.906 [2024-07-22 18:38:41.909512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.906 [2024-07-22 18:38:41.909527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.906 [2024-07-22 18:38:41.909557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.906 [2024-07-22 18:38:41.909580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:29.906 [2024-07-22 18:38:41.909594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.906 [2024-07-22 18:38:41.909609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.164 [2024-07-22 18:38:42.019237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.164 [2024-07-22 18:38:42.019317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:30.164 [2024-07-22 18:38:42.019339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.164 [2024-07-22 18:38:42.019355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.164 [2024-07-22 18:38:42.106169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.164 [2024-07-22 18:38:42.106255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:30.164 [2024-07-22 18:38:42.106277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.164 [2024-07-22 18:38:42.106298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.164 [2024-07-22 18:38:42.106431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.164 [2024-07-22 
18:38:42.106458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:30.165 [2024-07-22 18:38:42.106473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.165 [2024-07-22 18:38:42.106489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.165 [2024-07-22 18:38:42.106561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.165 [2024-07-22 18:38:42.106588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:30.165 [2024-07-22 18:38:42.106603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.165 [2024-07-22 18:38:42.106618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.165 [2024-07-22 18:38:42.106783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.165 [2024-07-22 18:38:42.106809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:30.165 [2024-07-22 18:38:42.106823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.165 [2024-07-22 18:38:42.106838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.165 [2024-07-22 18:38:42.106904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.165 [2024-07-22 18:38:42.106929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:30.165 [2024-07-22 18:38:42.106944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.165 [2024-07-22 18:38:42.106959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.165 [2024-07-22 18:38:42.107017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.165 [2024-07-22 18:38:42.107037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:30.165 [2024-07-22 18:38:42.107051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.165 [2024-07-22 18:38:42.107066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.165 [2024-07-22 18:38:42.107126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.165 [2024-07-22 18:38:42.107152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:30.165 [2024-07-22 18:38:42.107166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.165 [2024-07-22 18:38:42.107182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.165 [2024-07-22 18:38:42.107356] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 462.260 ms, result 0 00:31:30.165 true 00:31:30.165 18:38:42 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 87108 00:31:30.165 18:38:42 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 87108 ']' 00:31:30.165 18:38:42 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 87108 00:31:30.165 18:38:42 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # uname 00:31:30.165 18:38:42 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:30.165 18:38:42 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87108 00:31:30.165 18:38:42 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:30.165 killing process with pid 87108 00:31:30.165 18:38:42 ftl.ftl_restore_fast -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:30.165 18:38:42 ftl.ftl_restore_fast -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87108' 00:31:30.165 18:38:42 ftl.ftl_restore_fast -- common/autotest_common.sh@967 -- # kill 87108 00:31:30.165 18:38:42 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # wait 87108 00:31:35.429 18:38:47 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:40.703 262144+0 records in 00:31:40.703 262144+0 records out 00:31:40.703 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.69619 s, 229 MB/s 00:31:40.703 18:38:51 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:42.078 18:38:53 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:42.078 [2024-07-22 18:38:53.982293] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:42.078 [2024-07-22 18:38:53.982482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87337 ] 00:31:42.336 [2024-07-22 18:38:54.157111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.595 [2024-07-22 18:38:54.423886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.854 [2024-07-22 18:38:54.771457] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:42.854 [2024-07-22 18:38:54.771533] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:43.113 [2024-07-22 18:38:54.935299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.113 [2024-07-22 18:38:54.935376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:43.113 [2024-07-22 18:38:54.935424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:43.113 [2024-07-22 18:38:54.935438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.113 [2024-07-22 18:38:54.935514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.113 [2024-07-22 18:38:54.935536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:43.113 [2024-07-22 18:38:54.935549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:31:43.113 [2024-07-22 18:38:54.935565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.113 [2024-07-22 18:38:54.935597] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:43.113 [2024-07-22 18:38:54.936495] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:43.113 [2024-07-22 18:38:54.936529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.113 [2024-07-22 18:38:54.936547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:43.113 [2024-07-22 18:38:54.936560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 00:31:43.113 [2024-07-22 18:38:54.936572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:31:43.113 [2024-07-22 18:38:54.938477] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:43.113 [2024-07-22 18:38:54.955287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.113 [2024-07-22 18:38:54.955329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:43.113 [2024-07-22 18:38:54.955347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.811 ms 00:31:43.113 [2024-07-22 18:38:54.955360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.113 [2024-07-22 18:38:54.955445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.113 [2024-07-22 18:38:54.955466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:43.113 [2024-07-22 18:38:54.955484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:43.113 [2024-07-22 18:38:54.955496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.113 [2024-07-22 18:38:54.964533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.113 [2024-07-22 18:38:54.964592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:43.113 [2024-07-22 18:38:54.964608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.942 ms 00:31:43.113 [2024-07-22 18:38:54.964619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.113 [2024-07-22 18:38:54.964756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.113 [2024-07-22 18:38:54.964781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:43.113 [2024-07-22 18:38:54.964795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:31:43.113 [2024-07-22 18:38:54.964808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.113 [2024-07-22 18:38:54.964874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.113 [2024-07-22 18:38:54.964893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:43.113 [2024-07-22 18:38:54.964907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:43.113 [2024-07-22 18:38:54.964919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.113 [2024-07-22 18:38:54.964957] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:43.113 [2024-07-22 18:38:54.970108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.113 [2024-07-22 18:38:54.970172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:43.113 [2024-07-22 18:38:54.970187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.161 ms 00:31:43.113 [2024-07-22 18:38:54.970199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.113 [2024-07-22 18:38:54.970247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.113 [2024-07-22 18:38:54.970264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:43.114 [2024-07-22 18:38:54.970276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:43.114 [2024-07-22 18:38:54.970287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.114 [2024-07-22 18:38:54.970354] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 
0 00:31:43.114 [2024-07-22 18:38:54.970414] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:43.114 [2024-07-22 18:38:54.970461] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:43.114 [2024-07-22 18:38:54.970487] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:31:43.114 [2024-07-22 18:38:54.970594] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:43.114 [2024-07-22 18:38:54.970617] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:43.114 [2024-07-22 18:38:54.970633] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:43.114 [2024-07-22 18:38:54.970650] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:43.114 [2024-07-22 18:38:54.970664] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:43.114 [2024-07-22 18:38:54.970690] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:43.114 [2024-07-22 18:38:54.970705] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:43.114 [2024-07-22 18:38:54.970728] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:43.114 [2024-07-22 18:38:54.970739] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:43.114 [2024-07-22 18:38:54.970752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.114 [2024-07-22 18:38:54.970769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:43.114 [2024-07-22 18:38:54.970782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:31:43.114 [2024-07-22 18:38:54.970794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.114 [2024-07-22 18:38:54.970890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.114 [2024-07-22 18:38:54.970908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:43.114 [2024-07-22 18:38:54.970921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:31:43.114 [2024-07-22 18:38:54.970932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.114 [2024-07-22 18:38:54.971039] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:43.114 [2024-07-22 18:38:54.971056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:43.114 [2024-07-22 18:38:54.971075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:43.114 [2024-07-22 18:38:54.971087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.114 [2024-07-22 18:38:54.971100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:43.114 [2024-07-22 18:38:54.971111] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:43.114 [2024-07-22 18:38:54.971123] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:43.114 [2024-07-22 18:38:54.971134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:43.114 [2024-07-22 18:38:54.971146] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:43.114 [2024-07-22 18:38:54.971156] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:43.114 [2024-07-22 18:38:54.971169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:43.114 [2024-07-22 18:38:54.971182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:43.114 [2024-07-22 18:38:54.971193] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:43.114 [2024-07-22 18:38:54.971204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:43.114 [2024-07-22 18:38:54.971216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:43.114 [2024-07-22 18:38:54.971227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.114 [2024-07-22 18:38:54.971239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:43.114 [2024-07-22 18:38:54.971250] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:43.114 [2024-07-22 18:38:54.971261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.114 [2024-07-22 18:38:54.971274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:43.114 [2024-07-22 18:38:54.971310] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:43.114 [2024-07-22 18:38:54.971321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.114 [2024-07-22 18:38:54.971332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:43.114 [2024-07-22 18:38:54.971343] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:43.114 [2024-07-22 18:38:54.971354] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.114 [2024-07-22 18:38:54.971364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:43.114 [2024-07-22 18:38:54.971375] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:43.114 [2024-07-22 18:38:54.971417] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.114 [2024-07-22 18:38:54.971429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:43.114 [2024-07-22 18:38:54.971440] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:43.114 [2024-07-22 18:38:54.971451] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.114 [2024-07-22 18:38:54.971462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:43.114 [2024-07-22 18:38:54.971473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:43.114 [2024-07-22 18:38:54.971485] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:43.114 [2024-07-22 18:38:54.971496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:43.114 [2024-07-22 18:38:54.971507] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:43.114 [2024-07-22 18:38:54.971518] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:43.114 [2024-07-22 18:38:54.971529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:43.114 [2024-07-22 18:38:54.971541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:43.114 [2024-07-22 18:38:54.971552] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.114 [2024-07-22 
18:38:54.971563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:43.114 [2024-07-22 18:38:54.971574] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:43.114 [2024-07-22 18:38:54.971586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.114 [2024-07-22 18:38:54.971597] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:43.114 [2024-07-22 18:38:54.971610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:43.114 [2024-07-22 18:38:54.971621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:43.114 [2024-07-22 18:38:54.971633] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.114 [2024-07-22 18:38:54.971646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:43.114 [2024-07-22 18:38:54.971657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:43.114 [2024-07-22 18:38:54.971669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:43.114 [2024-07-22 18:38:54.971680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:43.114 [2024-07-22 18:38:54.971691] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:43.114 [2024-07-22 18:38:54.971718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:43.114 [2024-07-22 18:38:54.971733] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:43.114 [2024-07-22 18:38:54.971749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:43.114 [2024-07-22 18:38:54.971764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:43.114 [2024-07-22 18:38:54.971777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:43.114 [2024-07-22 18:38:54.971789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:43.114 [2024-07-22 18:38:54.971813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:43.114 [2024-07-22 18:38:54.971825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:43.114 [2024-07-22 18:38:54.971838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:43.114 [2024-07-22 18:38:54.971850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:43.114 [2024-07-22 18:38:54.971863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:43.114 [2024-07-22 18:38:54.971875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:43.114 [2024-07-22 18:38:54.971888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:43.114 [2024-07-22 18:38:54.971901] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:43.114 [2024-07-22 18:38:54.971913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:43.114 [2024-07-22 18:38:54.971926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:43.114 [2024-07-22 18:38:54.971939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:43.114 [2024-07-22 18:38:54.971951] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:43.114 [2024-07-22 18:38:54.971965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:43.114 [2024-07-22 18:38:54.971979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:43.114 [2024-07-22 18:38:54.971992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:43.115 [2024-07-22 18:38:54.972004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:43.115 [2024-07-22 18:38:54.972034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:43.115 [2024-07-22 18:38:54.972048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.115 [2024-07-22 18:38:54.972066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:43.115 [2024-07-22 18:38:54.972078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:31:43.115 [2024-07-22 18:38:54.972089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.115 [2024-07-22 18:38:55.018139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.115 [2024-07-22 18:38:55.018219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:43.115 [2024-07-22 18:38:55.018240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.982 ms 00:31:43.115 [2024-07-22 18:38:55.018253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.115 [2024-07-22 18:38:55.018382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.115 [2024-07-22 18:38:55.018400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:43.115 [2024-07-22 18:38:55.018414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:31:43.115 [2024-07-22 18:38:55.018425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.115 [2024-07-22 18:38:55.061912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.115 [2024-07-22 18:38:55.061970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:43.115 [2024-07-22 18:38:55.061990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.390 ms 00:31:43.115 [2024-07-22 18:38:55.062002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.115 [2024-07-22 18:38:55.062093] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.115 [2024-07-22 18:38:55.062110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:43.115 [2024-07-22 18:38:55.062124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:43.115 [2024-07-22 18:38:55.062135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.115 [2024-07-22 18:38:55.062793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.115 [2024-07-22 18:38:55.062819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:43.115 [2024-07-22 18:38:55.062833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:31:43.115 [2024-07-22 18:38:55.062845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.115 [2024-07-22 18:38:55.063026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.115 [2024-07-22 18:38:55.063047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:43.115 [2024-07-22 18:38:55.063061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:31:43.115 [2024-07-22 18:38:55.063072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.115 [2024-07-22 18:38:55.081484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.115 [2024-07-22 18:38:55.081542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:43.115 [2024-07-22 18:38:55.081559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.383 ms 00:31:43.115 [2024-07-22 18:38:55.081587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.115 [2024-07-22 18:38:55.098776] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:43.115 [2024-07-22 18:38:55.098820] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:43.115 [2024-07-22 18:38:55.098843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.115 [2024-07-22 18:38:55.098856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:43.115 [2024-07-22 18:38:55.098869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.075 ms 00:31:43.115 [2024-07-22 18:38:55.098881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.373 [2024-07-22 18:38:55.128304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.373 [2024-07-22 18:38:55.128385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:43.373 [2024-07-22 18:38:55.128404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.368 ms 00:31:43.373 [2024-07-22 18:38:55.128429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.373 [2024-07-22 18:38:55.144717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.373 [2024-07-22 18:38:55.144760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:43.373 [2024-07-22 18:38:55.144778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.199 ms 00:31:43.373 [2024-07-22 18:38:55.144791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.373 [2024-07-22 18:38:55.160018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
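Each management step in this startup sequence is traced as a fixed four-record group — Action, name:, duration:, status: (mngt/ftl_mngt.c:427-431) — and each whole sequence closes with a finish_msg summary (see the 'FTL startup' summary further down), so per-step timings can be pulled straight out of a saved copy of this output. A minimal sketch, assuming the console output was captured one record per line, as originally emitted, to a hypothetical ftl_console.log:

    #!/usr/bin/env bash
    set -euo pipefail
    # ftl_console.log is a hypothetical capture of the console output
    # shown here, one record per line.
    # 1) Pair each trace_step "name:" record with the "duration:" record
    #    that follows it; the two record types strictly alternate.
    grep -oE '(name: [A-Za-z0-9 ]+|duration: [0-9.]+ ms)' ftl_console.log |
      paste - -   # e.g. "name: Initialize NV cache   duration: 43.390 ms"
    # 2) End-to-end totals come from the finish_msg summary records.
    grep -oE "Management process finished, name '[^']+', duration = [0-9.]+ ms, result [0-9]+" \
      ftl_console.log
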
00:31:43.373 [2024-07-22 18:38:55.160089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:43.373 [2024-07-22 18:38:55.160106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.176 ms 00:31:43.373 [2024-07-22 18:38:55.160117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.373 [2024-07-22 18:38:55.161071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.373 [2024-07-22 18:38:55.161106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:43.373 [2024-07-22 18:38:55.161122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:31:43.373 [2024-07-22 18:38:55.161134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.373 [2024-07-22 18:38:55.239176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.373 [2024-07-22 18:38:55.239262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:43.373 [2024-07-22 18:38:55.239283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.008 ms 00:31:43.373 [2024-07-22 18:38:55.239296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.373 [2024-07-22 18:38:55.252121] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:43.373 [2024-07-22 18:38:55.256193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.373 [2024-07-22 18:38:55.256236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:43.373 [2024-07-22 18:38:55.256256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.815 ms 00:31:43.373 [2024-07-22 18:38:55.256268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.373 [2024-07-22 18:38:55.256392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.373 [2024-07-22 18:38:55.256414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:43.373 [2024-07-22 18:38:55.256428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:43.374 [2024-07-22 18:38:55.256439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.374 [2024-07-22 18:38:55.256540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.374 [2024-07-22 18:38:55.256565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:43.374 [2024-07-22 18:38:55.256585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:31:43.374 [2024-07-22 18:38:55.256597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.374 [2024-07-22 18:38:55.256633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.374 [2024-07-22 18:38:55.256650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:43.374 [2024-07-22 18:38:55.256663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:43.374 [2024-07-22 18:38:55.256675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.374 [2024-07-22 18:38:55.256733] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:43.374 [2024-07-22 18:38:55.256752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.374 [2024-07-22 18:38:55.256764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on 
startup 00:31:43.374 [2024-07-22 18:38:55.256777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:43.374 [2024-07-22 18:38:55.256794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.374 [2024-07-22 18:38:55.288347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.374 [2024-07-22 18:38:55.288395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:43.374 [2024-07-22 18:38:55.288414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.521 ms 00:31:43.374 [2024-07-22 18:38:55.288427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.374 [2024-07-22 18:38:55.288517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.374 [2024-07-22 18:38:55.288538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:43.374 [2024-07-22 18:38:55.288563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:31:43.374 [2024-07-22 18:38:55.288574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.374 [2024-07-22 18:38:55.289961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 354.125 ms, result 0 00:32:21.043  Copying: 25/1024 [MB] (25 MBps) [36 intermediate carriage-return progress updates elided] Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-22 18:39:32.941534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.043 [2024-07-22 18:39:32.941756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:21.043 [2024-07-22 18:39:32.941898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:21.043 [2024-07-22 18:39:32.941951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.043 [2024-07-22 18:39:32.941995] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:21.043 [2024-07-22 18:39:32.945622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.043 [2024-07-22 18:39:32.945658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:21.043 [2024-07-22 18:39:32.945676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.601 ms 00:32:21.043 [2024-07-22
18:39:32.945701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.043 [2024-07-22 18:39:32.947491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.043 [2024-07-22 18:39:32.947544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:21.043 [2024-07-22 18:39:32.947561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.760 ms 00:32:21.043 [2024-07-22 18:39:32.947576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.043 [2024-07-22 18:39:32.947615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.043 [2024-07-22 18:39:32.947631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:21.043 [2024-07-22 18:39:32.947644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:21.043 [2024-07-22 18:39:32.947655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.043 [2024-07-22 18:39:32.947732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.043 [2024-07-22 18:39:32.947750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:21.043 [2024-07-22 18:39:32.947763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:21.043 [2024-07-22 18:39:32.947779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.043 [2024-07-22 18:39:32.947799] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:21.043 [2024-07-22 18:39:32.947817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.947833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.947846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.947858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.947870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.947883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.947895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.947908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.947920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.947932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.947944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.947956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.947969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.947981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:32:21.043 [2024-07-22 18:39:32.947993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:21.043 [2024-07-22 18:39:32.948291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948951] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.948989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.949002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.949015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.949027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.949040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.949053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.949065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.949077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.949090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:21.044 [2024-07-22 18:39:32.949111] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:21.044 [2024-07-22 18:39:32.949123] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b92e00d0-5341-470f-9738-7edc0d43ccba 00:32:21.044 [2024-07-22 18:39:32.949136] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:21.044 [2024-07-22 18:39:32.949147] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:21.044 [2024-07-22 18:39:32.949159] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:21.044 [2024-07-22 18:39:32.949170] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:21.044 [2024-07-22 18:39:32.949182] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:21.044 [2024-07-22 18:39:32.949194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:21.044 [2024-07-22 18:39:32.949210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:21.044 [2024-07-22 18:39:32.949221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:21.044 [2024-07-22 18:39:32.949232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:21.044 [2024-07-22 18:39:32.949244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.044 [2024-07-22 18:39:32.949255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:21.044 [2024-07-22 18:39:32.949268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:32:21.044 [2024-07-22 18:39:32.949279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.044 [2024-07-22 18:39:32.966327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.044 [2024-07-22 18:39:32.966389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:32:21.044 [2024-07-22 18:39:32.966409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.021 ms 00:32:21.044 [2024-07-22 18:39:32.966421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.044 [2024-07-22 18:39:32.966940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.044 [2024-07-22 18:39:32.966970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:21.044 [2024-07-22 18:39:32.966985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:32:21.044 [2024-07-22 18:39:32.966997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.044 [2024-07-22 18:39:33.005423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.044 [2024-07-22 18:39:33.005494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:21.044 [2024-07-22 18:39:33.005514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.044 [2024-07-22 18:39:33.005533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.044 [2024-07-22 18:39:33.005624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.044 [2024-07-22 18:39:33.005640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:21.044 [2024-07-22 18:39:33.005653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.044 [2024-07-22 18:39:33.005664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.044 [2024-07-22 18:39:33.005760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.045 [2024-07-22 18:39:33.005780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:21.045 [2024-07-22 18:39:33.005793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.045 [2024-07-22 18:39:33.005805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.045 [2024-07-22 18:39:33.005836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.045 [2024-07-22 18:39:33.005850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:21.045 [2024-07-22 18:39:33.005862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.045 [2024-07-22 18:39:33.005874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.304 [2024-07-22 18:39:33.116807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.304 [2024-07-22 18:39:33.116883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:21.304 [2024-07-22 18:39:33.116903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.304 [2024-07-22 18:39:33.116916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.304 [2024-07-22 18:39:33.206637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.304 [2024-07-22 18:39:33.206705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:21.304 [2024-07-22 18:39:33.206725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.304 [2024-07-22 18:39:33.206747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.304 [2024-07-22 18:39:33.206829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.304 [2024-07-22 
18:39:33.206846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:21.304 [2024-07-22 18:39:33.206859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.304 [2024-07-22 18:39:33.206871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.304 [2024-07-22 18:39:33.206919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.304 [2024-07-22 18:39:33.206942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:21.304 [2024-07-22 18:39:33.206954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.304 [2024-07-22 18:39:33.206966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.304 [2024-07-22 18:39:33.207071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.304 [2024-07-22 18:39:33.207097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:21.304 [2024-07-22 18:39:33.207111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.304 [2024-07-22 18:39:33.207122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.304 [2024-07-22 18:39:33.207160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.304 [2024-07-22 18:39:33.207183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:21.304 [2024-07-22 18:39:33.207202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.304 [2024-07-22 18:39:33.207213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.304 [2024-07-22 18:39:33.207260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.304 [2024-07-22 18:39:33.207276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:21.304 [2024-07-22 18:39:33.207288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.304 [2024-07-22 18:39:33.207300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.304 [2024-07-22 18:39:33.207353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:21.304 [2024-07-22 18:39:33.207401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:21.304 [2024-07-22 18:39:33.207415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:21.304 [2024-07-22 18:39:33.207426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.304 [2024-07-22 18:39:33.207574] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 266.003 ms, result 0 00:32:22.678 00:32:22.678 00:32:22.678 18:39:34 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:22.678 [2024-07-22 18:39:34.495870] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
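For scale: the spdk_dd restore pass launched above reads --count=262144 blocks from the ftl0 bdev into the test file; assuming the FTL's 4 KiB block size, that is exactly the 1024 MiB the earlier copy pass reported:

    # 262144 blocks x 4096 B, assuming a 4 KiB FTL block size
    echo "$(( 262144 * 4096 / 1048576 )) MiB"   # -> 1024 MiB
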
00:32:22.678 [2024-07-22 18:39:34.496047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87739 ] 00:32:22.678 [2024-07-22 18:39:34.670220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.936 [2024-07-22 18:39:34.919083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.503 [2024-07-22 18:39:35.268488] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:23.503 [2024-07-22 18:39:35.268572] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:23.503 [2024-07-22 18:39:35.431539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.503 [2024-07-22 18:39:35.431615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:23.503 [2024-07-22 18:39:35.431636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:23.503 [2024-07-22 18:39:35.431649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.503 [2024-07-22 18:39:35.431755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.503 [2024-07-22 18:39:35.431778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:23.503 [2024-07-22 18:39:35.431792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:32:23.503 [2024-07-22 18:39:35.431814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.503 [2024-07-22 18:39:35.431850] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:23.503 [2024-07-22 18:39:35.432865] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:23.503 [2024-07-22 18:39:35.432917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.503 [2024-07-22 18:39:35.432936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:23.503 [2024-07-22 18:39:35.432950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:32:23.503 [2024-07-22 18:39:35.432962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.503 [2024-07-22 18:39:35.433479] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:23.503 [2024-07-22 18:39:35.433521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.503 [2024-07-22 18:39:35.433536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:23.503 [2024-07-22 18:39:35.433549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:23.503 [2024-07-22 18:39:35.433569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.503 [2024-07-22 18:39:35.433634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.503 [2024-07-22 18:39:35.433652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:23.503 [2024-07-22 18:39:35.433665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:23.503 [2024-07-22 18:39:35.433690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.503 [2024-07-22 18:39:35.434122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
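The ftl_dev_dump_bands listing during the fast shutdown above is uniform enough ("Band N: <valid> / <total> wr_cnt: <n> state: <state>") to tally mechanically. A small sketch under the same hypothetical ftl_console.log assumption; for that dump, all 100 bands report free:

    # Count bands by state; $NF is the last field of each match (the state).
    grep -oE 'Band [0-9]+: [0-9]+ / [0-9]+ wr_cnt: [0-9]+ state: [a-z]+' ftl_console.log |
      awk '{ tally[$NF]++ } END { for (s in tally) print s, tally[s] }'
    # expected for the dump above: "free 100"
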
00:32:23.503 [2024-07-22 18:39:35.434151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:23.504 [2024-07-22 18:39:35.434166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:32:23.504 [2024-07-22 18:39:35.434183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.504 [2024-07-22 18:39:35.434271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.504 [2024-07-22 18:39:35.434291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:23.504 [2024-07-22 18:39:35.434304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:32:23.504 [2024-07-22 18:39:35.434315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.504 [2024-07-22 18:39:35.434355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.504 [2024-07-22 18:39:35.434372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:23.504 [2024-07-22 18:39:35.434385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:23.504 [2024-07-22 18:39:35.434397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.504 [2024-07-22 18:39:35.434433] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:23.504 [2024-07-22 18:39:35.439784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.504 [2024-07-22 18:39:35.439839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:23.504 [2024-07-22 18:39:35.439861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.355 ms 00:32:23.504 [2024-07-22 18:39:35.439873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.504 [2024-07-22 18:39:35.439936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.504 [2024-07-22 18:39:35.439954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:23.504 [2024-07-22 18:39:35.439967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:32:23.504 [2024-07-22 18:39:35.439979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.504 [2024-07-22 18:39:35.440066] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:23.504 [2024-07-22 18:39:35.440103] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:23.504 [2024-07-22 18:39:35.440147] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:23.504 [2024-07-22 18:39:35.440173] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:32:23.504 [2024-07-22 18:39:35.440276] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:23.504 [2024-07-22 18:39:35.440302] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:23.504 [2024-07-22 18:39:35.440318] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:32:23.504 [2024-07-22 18:39:35.440335] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:23.504 [2024-07-22 18:39:35.440349] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:23.504 [2024-07-22 18:39:35.440362] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:23.504 [2024-07-22 18:39:35.440373] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:23.504 [2024-07-22 18:39:35.440384] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:23.504 [2024-07-22 18:39:35.440401] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:23.504 [2024-07-22 18:39:35.440413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.504 [2024-07-22 18:39:35.440425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:23.504 [2024-07-22 18:39:35.440437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:32:23.504 [2024-07-22 18:39:35.440448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.504 [2024-07-22 18:39:35.440544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.504 [2024-07-22 18:39:35.440566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:23.504 [2024-07-22 18:39:35.440579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:23.504 [2024-07-22 18:39:35.440590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.504 [2024-07-22 18:39:35.440719] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:23.504 [2024-07-22 18:39:35.440741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:23.504 [2024-07-22 18:39:35.440754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:23.504 [2024-07-22 18:39:35.440767] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:23.504 [2024-07-22 18:39:35.440778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:23.504 [2024-07-22 18:39:35.440789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:23.504 [2024-07-22 18:39:35.440800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:23.504 [2024-07-22 18:39:35.440810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:23.504 [2024-07-22 18:39:35.440821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:23.504 [2024-07-22 18:39:35.440832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:23.504 [2024-07-22 18:39:35.440842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:23.504 [2024-07-22 18:39:35.440853] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:23.504 [2024-07-22 18:39:35.440863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:23.504 [2024-07-22 18:39:35.440874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:23.504 [2024-07-22 18:39:35.440886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:23.504 [2024-07-22 18:39:35.440897] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:23.504 [2024-07-22 18:39:35.440908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:23.504 [2024-07-22 18:39:35.440919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:23.504 [2024-07-22 18:39:35.440929] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:23.504 [2024-07-22 18:39:35.440940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:23.504 [2024-07-22 18:39:35.440951] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:23.504 [2024-07-22 18:39:35.440962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:23.504 [2024-07-22 18:39:35.440989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:23.504 [2024-07-22 18:39:35.441000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:23.504 [2024-07-22 18:39:35.441011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:23.504 [2024-07-22 18:39:35.441022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:23.504 [2024-07-22 18:39:35.441032] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:23.504 [2024-07-22 18:39:35.441042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:23.504 [2024-07-22 18:39:35.441053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:23.504 [2024-07-22 18:39:35.441063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:23.504 [2024-07-22 18:39:35.441074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:23.504 [2024-07-22 18:39:35.441084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:23.504 [2024-07-22 18:39:35.441095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:23.504 [2024-07-22 18:39:35.441106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:23.504 [2024-07-22 18:39:35.441116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:23.504 [2024-07-22 18:39:35.441127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:23.504 [2024-07-22 18:39:35.441146] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:23.504 [2024-07-22 18:39:35.441157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:23.504 [2024-07-22 18:39:35.441168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:23.504 [2024-07-22 18:39:35.441178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:23.504 [2024-07-22 18:39:35.441189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:23.504 [2024-07-22 18:39:35.441199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:23.504 [2024-07-22 18:39:35.441210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:23.504 [2024-07-22 18:39:35.441220] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:23.504 [2024-07-22 18:39:35.441231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:23.504 [2024-07-22 18:39:35.441243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:23.504 [2024-07-22 18:39:35.441256] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:23.504 [2024-07-22 18:39:35.441268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:23.504 [2024-07-22 18:39:35.441279] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:23.504 [2024-07-22 18:39:35.441290] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:23.504 
[2024-07-22 18:39:35.441301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:23.504 [2024-07-22 18:39:35.441311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:23.504 [2024-07-22 18:39:35.441322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:23.504 [2024-07-22 18:39:35.441335] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:23.504 [2024-07-22 18:39:35.441349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:23.504 [2024-07-22 18:39:35.441362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:23.504 [2024-07-22 18:39:35.441373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:23.504 [2024-07-22 18:39:35.441385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:23.504 [2024-07-22 18:39:35.441397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:23.505 [2024-07-22 18:39:35.441408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:23.505 [2024-07-22 18:39:35.441420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:23.505 [2024-07-22 18:39:35.441432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:23.505 [2024-07-22 18:39:35.441444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:23.505 [2024-07-22 18:39:35.441456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:23.505 [2024-07-22 18:39:35.441467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:23.505 [2024-07-22 18:39:35.441479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:23.505 [2024-07-22 18:39:35.441490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:23.505 [2024-07-22 18:39:35.441502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:23.505 [2024-07-22 18:39:35.441514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:23.505 [2024-07-22 18:39:35.441525] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:23.505 [2024-07-22 18:39:35.441544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:23.505 [2024-07-22 18:39:35.441557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:23.505 [2024-07-22 18:39:35.441569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:23.505 [2024-07-22 18:39:35.441581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:23.505 [2024-07-22 18:39:35.441594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:23.505 [2024-07-22 18:39:35.441607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.505 [2024-07-22 18:39:35.441619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:23.505 [2024-07-22 18:39:35.441632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:32:23.505 [2024-07-22 18:39:35.441644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.505 [2024-07-22 18:39:35.491127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.505 [2024-07-22 18:39:35.491230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:23.505 [2024-07-22 18:39:35.491267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.384 ms 00:32:23.505 [2024-07-22 18:39:35.491293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.505 [2024-07-22 18:39:35.491476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.505 [2024-07-22 18:39:35.491509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:23.505 [2024-07-22 18:39:35.491533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:32:23.505 [2024-07-22 18:39:35.491554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.763 [2024-07-22 18:39:35.553792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.763 [2024-07-22 18:39:35.553885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:23.763 [2024-07-22 18:39:35.553916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.036 ms 00:32:23.763 [2024-07-22 18:39:35.553934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.763 [2024-07-22 18:39:35.554037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.763 [2024-07-22 18:39:35.554069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:23.763 [2024-07-22 18:39:35.554090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:23.763 [2024-07-22 18:39:35.554107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.763 [2024-07-22 18:39:35.554342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.763 [2024-07-22 18:39:35.554381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:23.763 [2024-07-22 18:39:35.554404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:32:23.763 [2024-07-22 18:39:35.554422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.763 [2024-07-22 18:39:35.554626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.763 [2024-07-22 18:39:35.554664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:23.763 [2024-07-22 18:39:35.554712] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:32:23.763 [2024-07-22 18:39:35.554730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.763 [2024-07-22 18:39:35.580516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.763 [2024-07-22 18:39:35.580627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:23.763 [2024-07-22 18:39:35.580662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.736 ms 00:32:23.763 [2024-07-22 18:39:35.580721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.763 [2024-07-22 18:39:35.581061] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:23.763 [2024-07-22 18:39:35.581101] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:23.763 [2024-07-22 18:39:35.581140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.763 [2024-07-22 18:39:35.581163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:23.763 [2024-07-22 18:39:35.581192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:32:23.763 [2024-07-22 18:39:35.581221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.763 [2024-07-22 18:39:35.596743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.763 [2024-07-22 18:39:35.596870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:23.763 [2024-07-22 18:39:35.596906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.465 ms 00:32:23.763 [2024-07-22 18:39:35.596929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.763 [2024-07-22 18:39:35.597187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.763 [2024-07-22 18:39:35.597223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:23.763 [2024-07-22 18:39:35.597249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:32:23.763 [2024-07-22 18:39:35.597272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.763 [2024-07-22 18:39:35.597408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.763 [2024-07-22 18:39:35.597473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:23.763 [2024-07-22 18:39:35.597501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:23.764 [2024-07-22 18:39:35.597521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.764 [2024-07-22 18:39:35.598633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.764 [2024-07-22 18:39:35.598723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:23.764 [2024-07-22 18:39:35.598755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:32:23.764 [2024-07-22 18:39:35.598785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.764 [2024-07-22 18:39:35.598831] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:23.764 [2024-07-22 18:39:35.598864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.764 [2024-07-22 18:39:35.598916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:23.764 [2024-07-22 18:39:35.598937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:23.764 [2024-07-22 18:39:35.598954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.764 [2024-07-22 18:39:35.619316] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:23.764 [2024-07-22 18:39:35.619712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.764 [2024-07-22 18:39:35.619741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:23.764 [2024-07-22 18:39:35.619759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.710 ms 00:32:23.764 [2024-07-22 18:39:35.619772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.764 [2024-07-22 18:39:35.622119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.764 [2024-07-22 18:39:35.622154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:23.764 [2024-07-22 18:39:35.622176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.270 ms 00:32:23.764 [2024-07-22 18:39:35.622187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.764 [2024-07-22 18:39:35.622346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.764 [2024-07-22 18:39:35.622376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:23.764 [2024-07-22 18:39:35.622390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:23.764 [2024-07-22 18:39:35.622402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.764 [2024-07-22 18:39:35.622437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.764 [2024-07-22 18:39:35.622454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:23.764 [2024-07-22 18:39:35.622466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:23.764 [2024-07-22 18:39:35.622486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.764 [2024-07-22 18:39:35.622528] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:23.764 [2024-07-22 18:39:35.622545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.764 [2024-07-22 18:39:35.622557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:23.764 [2024-07-22 18:39:35.622569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:23.764 [2024-07-22 18:39:35.622581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.764 [2024-07-22 18:39:35.655188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.764 [2024-07-22 18:39:35.655270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:23.764 [2024-07-22 18:39:35.655304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.578 ms 00:32:23.764 [2024-07-22 18:39:35.655317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.764 [2024-07-22 18:39:35.655445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.764 [2024-07-22 18:39:35.655465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:23.764 [2024-07-22 18:39:35.655479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.045 ms 00:32:23.764 [2024-07-22 18:39:35.655491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.764 [2024-07-22 18:39:35.656947] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.856 ms, result 0 00:33:01.832  Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-22 18:40:13.820317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.832 [2024-07-22 18:40:13.820412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:01.832 [2024-07-22 18:40:13.820439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:01.832 [2024-07-22 18:40:13.820455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.832 [2024-07-22 18:40:13.820495] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:01.832 [2024-07-22 18:40:13.825001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.832 [2024-07-22 18:40:13.825057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:01.832 [2024-07-22 18:40:13.825076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.476 ms 00:33:01.832 [2024-07-22 18:40:13.825091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.832 [2024-07-22 18:40:13.825430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.832 [2024-07-22 18:40:13.825474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:01.832 [2024-07-22 18:40:13.826181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:33:01.832 [2024-07-22 18:40:13.826199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.832 [2024-07-22 18:40:13.826246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.832 [2024-07-22 18:40:13.826265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:01.832 [2024-07-22 18:40:13.826290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:01.832 [2024-07-22 18:40:13.826304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.832 
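The trace_step records in this log follow a fixed pattern per management step (an Action line, then name, duration, and status), and finish_msg reports the per-process total, e.g. 'FTL startup', duration = 224.856 ms above. A minimal sketch for tallying those timings from a saved copy of this console output; the file name "build.log" is a placeholder, and the record layout is inferred from the output itself rather than from any SPDK API:

    #!/usr/bin/env python3
    # Sketch only, not an SPDK tool: tally trace_step durations per step name.
    import re
    import sys
    from collections import OrderedDict

    # Each record begins with a bracketed wall-clock timestamp; splitting on it
    # undoes the console wrapping that packs several records onto one line.
    TS_SPLIT = re.compile(r"(?=\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+\])")

    def tally(text):
        steps, name = OrderedDict(), None
        for rec in TS_SPLIT.split(text):
            rec = " ".join(rec.split())  # collapse wrapped whitespace
            m = re.search(r"\] name: (.+?)(?: \d{2}:\d{2}:\d{2}\.\d+)?$", rec)
            if m:
                name = m.group(1)
                continue
            m = re.search(r"\] duration: ([0-9.]+) ms", rec)
            if m and name is not None:
                # Sums across repeats, so startup, shutdown and the second
                # run below all contribute to the same step name.
                steps[name] = steps.get(name, 0.0) + float(m.group(1))
                name = None
        return steps

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "build.log"  # placeholder
        steps = tally(open(path).read())
        for step, ms in steps.items():
            print(f"{ms:10.3f} ms  {step}")
        print(f"{sum(steps.values()):10.3f} ms  total")

Summing only the startup-phase durations this way should land close to the 224.856 ms finish_msg total, with the remainder being untraced time between steps.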
[2024-07-22 18:40:13.826377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.832 [2024-07-22 18:40:13.826396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:01.832 [2024-07-22 18:40:13.826412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:33:01.832 [2024-07-22 18:40:13.826425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.832 [2024-07-22 18:40:13.826451] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:01.832 [2024-07-22 18:40:13.826479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:33:01.832 [2024-07-22 18:40:13.826811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.826987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:01.832 [2024-07-22 18:40:13.827003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.827986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.828001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.828015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.828030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.828045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.828059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.828074] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.828089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.828104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.828128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.828143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:01.833 [2024-07-22 18:40:13.828169] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:01.833 [2024-07-22 18:40:13.828183] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b92e00d0-5341-470f-9738-7edc0d43ccba 00:33:01.833 [2024-07-22 18:40:13.828198] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:01.833 [2024-07-22 18:40:13.828219] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:01.833 [2024-07-22 18:40:13.828233] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:01.833 [2024-07-22 18:40:13.828247] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:01.833 [2024-07-22 18:40:13.828261] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:01.833 [2024-07-22 18:40:13.828275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:01.833 [2024-07-22 18:40:13.828289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:01.833 [2024-07-22 18:40:13.828302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:01.833 [2024-07-22 18:40:13.828315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:01.833 [2024-07-22 18:40:13.828335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.833 [2024-07-22 18:40:13.828350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:01.833 [2024-07-22 18:40:13.828365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.885 ms 00:33:01.833 [2024-07-22 18:40:13.828379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.092 [2024-07-22 18:40:13.849967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.092 [2024-07-22 18:40:13.850042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:02.092 [2024-07-22 18:40:13.850065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.552 ms 00:33:02.092 [2024-07-22 18:40:13.850080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.092 [2024-07-22 18:40:13.850663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.092 [2024-07-22 18:40:13.850730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:02.092 [2024-07-22 18:40:13.850750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:33:02.092 [2024-07-22 18:40:13.850764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.092 [2024-07-22 18:40:13.898080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.092 [2024-07-22 18:40:13.898158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:02.092 [2024-07-22 18:40:13.898180] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.092 [2024-07-22 18:40:13.898195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.092 [2024-07-22 18:40:13.898300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.092 [2024-07-22 18:40:13.898319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:02.092 [2024-07-22 18:40:13.898335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.092 [2024-07-22 18:40:13.898349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.092 [2024-07-22 18:40:13.898458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.092 [2024-07-22 18:40:13.898484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:02.092 [2024-07-22 18:40:13.898501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.092 [2024-07-22 18:40:13.898515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.092 [2024-07-22 18:40:13.898544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.092 [2024-07-22 18:40:13.898561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:02.092 [2024-07-22 18:40:13.898576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.092 [2024-07-22 18:40:13.898590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.092 [2024-07-22 18:40:14.029992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.092 [2024-07-22 18:40:14.030072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:02.092 [2024-07-22 18:40:14.030096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.092 [2024-07-22 18:40:14.030110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.350 [2024-07-22 18:40:14.140435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.350 [2024-07-22 18:40:14.140520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:02.350 [2024-07-22 18:40:14.140542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.350 [2024-07-22 18:40:14.140557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.350 [2024-07-22 18:40:14.140651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.350 [2024-07-22 18:40:14.140700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:02.350 [2024-07-22 18:40:14.140718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.350 [2024-07-22 18:40:14.140733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.350 [2024-07-22 18:40:14.140790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.350 [2024-07-22 18:40:14.140809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:02.350 [2024-07-22 18:40:14.140824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.350 [2024-07-22 18:40:14.140838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.350 [2024-07-22 18:40:14.140959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.350 [2024-07-22 18:40:14.141001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:33:02.350 [2024-07-22 18:40:14.141018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.350 [2024-07-22 18:40:14.141032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.350 [2024-07-22 18:40:14.141081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.350 [2024-07-22 18:40:14.141110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:02.350 [2024-07-22 18:40:14.141126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.350 [2024-07-22 18:40:14.141140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.350 [2024-07-22 18:40:14.141204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.350 [2024-07-22 18:40:14.141223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:02.350 [2024-07-22 18:40:14.141245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.350 [2024-07-22 18:40:14.141259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.350 [2024-07-22 18:40:14.141321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.350 [2024-07-22 18:40:14.141353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:02.350 [2024-07-22 18:40:14.141370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.350 [2024-07-22 18:40:14.141385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.350 [2024-07-22 18:40:14.141562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 321.202 ms, result 0 00:33:03.285 00:33:03.285 00:33:03.285 18:40:15 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:05.816 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:05.816 18:40:17 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:05.816 [2024-07-22 18:40:17.612355] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
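Each FTL layout region is reported twice during startup, both above and again in the run that follows: dump_region prints offsets and sizes in MiB, while the "SB metadata layout" tables print blk_offs/blk_sz in FTL blocks. The two agree at a 4 KiB block size, which can be read off this log itself (region type 0x2, blk_sz 0x5000 = 20480 blocks, printed as the 80.00 MiB l2p region) rather than taken from SPDK headers. A small cross-check sketch, with the first few table entries copied from the dump:

    #!/usr/bin/env python3
    # Sketch: cross-check the MiB dump_region lines against the blk_offs/blk_sz
    # table. The 4096-byte block size is inferred from this log, not hard-coded
    # from SPDK sources.
    FTL_BLOCK_SIZE = 4096  # bytes, inferred: 0x5000 blocks prints as 80.00 MiB

    def blocks_to_mib(nblocks):
        return nblocks * FTL_BLOCK_SIZE / (1024 * 1024)

    # Entries copied from the "SB metadata layout - nvc" table in the dump;
    # the comments match regions by their printed offset/size.
    regions = {
        0x0: (0x0, 0x20),     # matches "Region sb" (0.00 MiB / 0.12 MiB)
        0x2: (0x20, 0x5000),  # matches "Region l2p" (0.12 MiB / 80.00 MiB)
        0x3: (0x5020, 0x80),  # matches "Region band_md" (80.12 MiB / 0.50 MiB)
    }

    for rtype, (blk_offs, blk_sz) in regions.items():
        print(f"type 0x{rtype:x}: offset {blocks_to_mib(blk_offs):.2f} MiB, "
              f"size {blocks_to_mib(blk_sz):.2f} MiB")

Running it reproduces the dump's own figures, e.g. type 0x3 at offset 80.12 MiB with size 0.50 MiB, exactly the band_md line in the NV cache layout.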
00:33:05.816 [2024-07-22 18:40:17.612555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88155 ] 00:33:05.816 [2024-07-22 18:40:17.789835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.075 [2024-07-22 18:40:18.074892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.643 [2024-07-22 18:40:18.432359] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:06.643 [2024-07-22 18:40:18.432452] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:06.643 [2024-07-22 18:40:18.595058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.643 [2024-07-22 18:40:18.595130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:06.643 [2024-07-22 18:40:18.595162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:06.643 [2024-07-22 18:40:18.595182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.643 [2024-07-22 18:40:18.595297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.643 [2024-07-22 18:40:18.595326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:06.643 [2024-07-22 18:40:18.595347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:33:06.643 [2024-07-22 18:40:18.595391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.643 [2024-07-22 18:40:18.595445] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:06.643 [2024-07-22 18:40:18.596625] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:06.643 [2024-07-22 18:40:18.596671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.643 [2024-07-22 18:40:18.596720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:06.643 [2024-07-22 18:40:18.596743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:33:06.643 [2024-07-22 18:40:18.596763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.643 [2024-07-22 18:40:18.597405] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:06.643 [2024-07-22 18:40:18.597453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.643 [2024-07-22 18:40:18.597477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:06.643 [2024-07-22 18:40:18.597500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:33:06.643 [2024-07-22 18:40:18.597530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.643 [2024-07-22 18:40:18.597626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.643 [2024-07-22 18:40:18.597653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:06.643 [2024-07-22 18:40:18.597674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:33:06.643 [2024-07-22 18:40:18.597711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.643 [2024-07-22 18:40:18.598292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:06.643 [2024-07-22 18:40:18.598329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:06.643 [2024-07-22 18:40:18.598353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:33:06.643 [2024-07-22 18:40:18.598381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.643 [2024-07-22 18:40:18.598497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.643 [2024-07-22 18:40:18.598526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:06.643 [2024-07-22 18:40:18.598546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:33:06.643 [2024-07-22 18:40:18.598565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.643 [2024-07-22 18:40:18.598625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.643 [2024-07-22 18:40:18.598660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:06.643 [2024-07-22 18:40:18.598698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:33:06.643 [2024-07-22 18:40:18.598722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.643 [2024-07-22 18:40:18.598779] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:06.643 [2024-07-22 18:40:18.604054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.643 [2024-07-22 18:40:18.604097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:06.643 [2024-07-22 18:40:18.604129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.284 ms 00:33:06.643 [2024-07-22 18:40:18.604149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.643 [2024-07-22 18:40:18.604225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.643 [2024-07-22 18:40:18.604253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:06.643 [2024-07-22 18:40:18.604273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:33:06.643 [2024-07-22 18:40:18.604291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.643 [2024-07-22 18:40:18.604392] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:06.643 [2024-07-22 18:40:18.604449] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:06.643 [2024-07-22 18:40:18.604510] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:06.643 [2024-07-22 18:40:18.604550] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:33:06.643 [2024-07-22 18:40:18.604703] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:06.643 [2024-07-22 18:40:18.604734] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:06.643 [2024-07-22 18:40:18.604759] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:33:06.643 [2024-07-22 18:40:18.604784] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:06.643 [2024-07-22 18:40:18.604814] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:06.643 [2024-07-22 18:40:18.604835] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:06.643 [2024-07-22 18:40:18.604853] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:06.643 [2024-07-22 18:40:18.604871] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:06.643 [2024-07-22 18:40:18.604895] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:06.643 [2024-07-22 18:40:18.604916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.643 [2024-07-22 18:40:18.604935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:06.643 [2024-07-22 18:40:18.604955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:33:06.643 [2024-07-22 18:40:18.604974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.643 [2024-07-22 18:40:18.605100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.643 [2024-07-22 18:40:18.605126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:06.643 [2024-07-22 18:40:18.605146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:33:06.643 [2024-07-22 18:40:18.605165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.643 [2024-07-22 18:40:18.605316] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:06.643 [2024-07-22 18:40:18.605345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:06.643 [2024-07-22 18:40:18.605366] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:06.643 [2024-07-22 18:40:18.605385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:06.643 [2024-07-22 18:40:18.605406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:06.643 [2024-07-22 18:40:18.605424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:06.643 [2024-07-22 18:40:18.605443] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:06.643 [2024-07-22 18:40:18.605461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:06.643 [2024-07-22 18:40:18.605479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:06.643 [2024-07-22 18:40:18.605497] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:06.644 [2024-07-22 18:40:18.605514] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:06.644 [2024-07-22 18:40:18.605532] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:06.644 [2024-07-22 18:40:18.605549] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:06.644 [2024-07-22 18:40:18.605566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:06.644 [2024-07-22 18:40:18.605585] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:06.644 [2024-07-22 18:40:18.605603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:06.644 [2024-07-22 18:40:18.605620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:06.644 [2024-07-22 18:40:18.605639] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:06.644 [2024-07-22 18:40:18.605657] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:06.644 [2024-07-22 18:40:18.605674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:06.644 [2024-07-22 18:40:18.605713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:06.644 [2024-07-22 18:40:18.605731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:06.644 [2024-07-22 18:40:18.605766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:06.644 [2024-07-22 18:40:18.605786] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:06.644 [2024-07-22 18:40:18.605803] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:06.644 [2024-07-22 18:40:18.605820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:06.644 [2024-07-22 18:40:18.605838] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:06.644 [2024-07-22 18:40:18.605856] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:06.644 [2024-07-22 18:40:18.605873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:06.644 [2024-07-22 18:40:18.605891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:06.644 [2024-07-22 18:40:18.605908] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:06.644 [2024-07-22 18:40:18.605926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:06.644 [2024-07-22 18:40:18.605946] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:06.644 [2024-07-22 18:40:18.605966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:06.644 [2024-07-22 18:40:18.605985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:06.644 [2024-07-22 18:40:18.606005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:06.644 [2024-07-22 18:40:18.606022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:06.644 [2024-07-22 18:40:18.606040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:06.644 [2024-07-22 18:40:18.606059] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:06.644 [2024-07-22 18:40:18.606078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:06.644 [2024-07-22 18:40:18.606097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:06.644 [2024-07-22 18:40:18.606115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:06.644 [2024-07-22 18:40:18.606134] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:06.644 [2024-07-22 18:40:18.606152] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:06.644 [2024-07-22 18:40:18.606171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:06.644 [2024-07-22 18:40:18.606189] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:06.644 [2024-07-22 18:40:18.606209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:06.644 [2024-07-22 18:40:18.606229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:06.644 [2024-07-22 18:40:18.606246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:06.644 [2024-07-22 18:40:18.606265] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:06.644 
[2024-07-22 18:40:18.606284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:06.644 [2024-07-22 18:40:18.606301] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:06.644 [2024-07-22 18:40:18.606320] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:06.644 [2024-07-22 18:40:18.606340] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:06.644 [2024-07-22 18:40:18.606363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:06.644 [2024-07-22 18:40:18.606383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:06.644 [2024-07-22 18:40:18.606402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:06.644 [2024-07-22 18:40:18.606420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:06.644 [2024-07-22 18:40:18.606439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:06.644 [2024-07-22 18:40:18.606458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:06.644 [2024-07-22 18:40:18.606477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:06.644 [2024-07-22 18:40:18.606496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:06.644 [2024-07-22 18:40:18.606516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:06.644 [2024-07-22 18:40:18.606534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:06.644 [2024-07-22 18:40:18.606553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:06.644 [2024-07-22 18:40:18.606574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:06.644 [2024-07-22 18:40:18.606594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:06.644 [2024-07-22 18:40:18.606613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:06.644 [2024-07-22 18:40:18.606633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:06.644 [2024-07-22 18:40:18.606652] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:06.644 [2024-07-22 18:40:18.606695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:06.644 [2024-07-22 18:40:18.606720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:06.644 [2024-07-22 18:40:18.606741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:06.644 [2024-07-22 18:40:18.606760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:06.644 [2024-07-22 18:40:18.606779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:06.644 [2024-07-22 18:40:18.606801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.644 [2024-07-22 18:40:18.606819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:06.644 [2024-07-22 18:40:18.606840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:33:06.644 [2024-07-22 18:40:18.606868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.662529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.662597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:06.904 [2024-07-22 18:40:18.662619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.515 ms 00:33:06.904 [2024-07-22 18:40:18.662632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.662779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.662798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:06.904 [2024-07-22 18:40:18.662812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:33:06.904 [2024-07-22 18:40:18.662824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.706248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.706315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:06.904 [2024-07-22 18:40:18.706335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.289 ms 00:33:06.904 [2024-07-22 18:40:18.706347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.706431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.706449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:06.904 [2024-07-22 18:40:18.706468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:06.904 [2024-07-22 18:40:18.706480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.706660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.706708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:06.904 [2024-07-22 18:40:18.706733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:33:06.904 [2024-07-22 18:40:18.706748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.706987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.707038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:06.904 [2024-07-22 18:40:18.707063] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:33:06.904 [2024-07-22 18:40:18.707096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.725637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.725723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:06.904 [2024-07-22 18:40:18.725749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.499 ms 00:33:06.904 [2024-07-22 18:40:18.725762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.726008] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:06.904 [2024-07-22 18:40:18.726052] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:06.904 [2024-07-22 18:40:18.726079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.726101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:06.904 [2024-07-22 18:40:18.726122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:33:06.904 [2024-07-22 18:40:18.726134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.739728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.739800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:06.904 [2024-07-22 18:40:18.739820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.542 ms 00:33:06.904 [2024-07-22 18:40:18.739832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.740011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.740046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:06.904 [2024-07-22 18:40:18.740070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:33:06.904 [2024-07-22 18:40:18.740094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.740220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.740266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:06.904 [2024-07-22 18:40:18.740306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:06.904 [2024-07-22 18:40:18.740326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.741347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.741383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:06.904 [2024-07-22 18:40:18.741398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:33:06.904 [2024-07-22 18:40:18.741410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.741436] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:06.904 [2024-07-22 18:40:18.741453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.741479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:06.904 [2024-07-22 18:40:18.741496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:33:06.904 [2024-07-22 18:40:18.741507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.757601] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:06.904 [2024-07-22 18:40:18.757923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.757944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:06.904 [2024-07-22 18:40:18.757961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.386 ms 00:33:06.904 [2024-07-22 18:40:18.757976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.760456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.760494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:06.904 [2024-07-22 18:40:18.760509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.379 ms 00:33:06.904 [2024-07-22 18:40:18.760526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.760669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.760704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:06.904 [2024-07-22 18:40:18.760719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:06.904 [2024-07-22 18:40:18.760731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.760785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.760812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:06.904 [2024-07-22 18:40:18.760830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:06.904 [2024-07-22 18:40:18.760848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.760904] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:06.904 [2024-07-22 18:40:18.760931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.760952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:06.904 [2024-07-22 18:40:18.760974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:06.904 [2024-07-22 18:40:18.760996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.904 [2024-07-22 18:40:18.795566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.904 [2024-07-22 18:40:18.795647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:06.905 [2024-07-22 18:40:18.795689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.530 ms 00:33:06.905 [2024-07-22 18:40:18.795705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.905 [2024-07-22 18:40:18.795831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.905 [2024-07-22 18:40:18.795860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:06.905 [2024-07-22 18:40:18.795874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.042 ms 00:33:06.905 [2024-07-22 18:40:18.795886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.905 [2024-07-22 18:40:18.797446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 201.865 ms, result 0 00:33:47.544  Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-22 18:40:59.385262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.544 [2024-07-22 18:40:59.385340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:47.544 [2024-07-22 18:40:59.385373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:47.544 [2024-07-22 18:40:59.385385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.544 [2024-07-22 18:40:59.388579] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:47.544 [2024-07-22 18:40:59.393628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.544 [2024-07-22 18:40:59.393671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:47.544 [2024-07-22 18:40:59.393699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.998 ms 00:33:47.544 [2024-07-22 18:40:59.393718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.544 [2024-07-22 18:40:59.404909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.544 [2024-07-22 18:40:59.404955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:47.544 [2024-07-22 18:40:59.404973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.843 ms 00:33:47.544 [2024-07-22 18:40:59.404997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.544 [2024-07-22 18:40:59.405037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.544 [2024-07-22 18:40:59.405053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:47.544 [2024-07-22 18:40:59.405066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:47.544 
[2024-07-22 18:40:59.405077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.544 [2024-07-22 18:40:59.405137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.544 [2024-07-22 18:40:59.405153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:47.544 [2024-07-22 18:40:59.405166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:33:47.544 [2024-07-22 18:40:59.405177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.544 [2024-07-22 18:40:59.405203] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:47.544 [2024-07-22 18:40:59.405220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130560 / 261120 wr_cnt: 1 state: open 00:33:47.544 [2024-07-22 18:40:59.405235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 
18:40:59.405460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:33:47.544 [2024-07-22 18:40:59.405776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:47.544 [2024-07-22 18:40:59.405845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.405856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.405868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.405880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.405891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.405903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.405929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.405941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.405952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.405963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.405975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.405987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.405998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:47.545 [2024-07-22 18:40:59.406443] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:47.545 [2024-07-22 18:40:59.406455] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b92e00d0-5341-470f-9738-7edc0d43ccba 00:33:47.545 [2024-07-22 18:40:59.406466] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130560 00:33:47.545 [2024-07-22 18:40:59.406477] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130592 00:33:47.545 [2024-07-22 18:40:59.406488] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130560 00:33:47.545 [2024-07-22 18:40:59.406504] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:33:47.545 [2024-07-22 18:40:59.406515] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:47.545 [2024-07-22 18:40:59.406527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:47.545 [2024-07-22 18:40:59.406537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:47.545 [2024-07-22 18:40:59.406547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:47.545 [2024-07-22 18:40:59.406558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:47.545 [2024-07-22 18:40:59.406569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.545 [2024-07-22 18:40:59.406581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:47.545 [2024-07-22 18:40:59.406597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms 00:33:47.545 [2024-07-22 18:40:59.406608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.545 [2024-07-22 18:40:59.423610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.545 [2024-07-22 18:40:59.423653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:47.545 [2024-07-22 18:40:59.423671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.976 ms 00:33:47.545 [2024-07-22 18:40:59.423707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.545 [2024-07-22 18:40:59.424220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.545 [2024-07-22 18:40:59.424252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:47.545 [2024-07-22 18:40:59.424267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:33:47.545 [2024-07-22 18:40:59.424279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.545 [2024-07-22 18:40:59.462923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.545 [2024-07-22 18:40:59.462982] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:47.545 [2024-07-22 18:40:59.463000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.545 [2024-07-22 18:40:59.463012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.545 [2024-07-22 18:40:59.463096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.545 [2024-07-22 18:40:59.463112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:47.545 [2024-07-22 18:40:59.463124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.545 [2024-07-22 18:40:59.463135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.545 [2024-07-22 18:40:59.463210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.545 [2024-07-22 18:40:59.463231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:47.545 [2024-07-22 18:40:59.463244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.545 [2024-07-22 18:40:59.463263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.545 [2024-07-22 18:40:59.463294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.545 [2024-07-22 18:40:59.463317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:47.545 [2024-07-22 18:40:59.463335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.545 [2024-07-22 18:40:59.463346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.805 [2024-07-22 18:40:59.570724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.805 [2024-07-22 18:40:59.570797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:47.805 [2024-07-22 18:40:59.570818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.805 [2024-07-22 18:40:59.570830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.805 [2024-07-22 18:40:59.658478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.805 [2024-07-22 18:40:59.658550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:47.805 [2024-07-22 18:40:59.658571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.805 [2024-07-22 18:40:59.658583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.805 [2024-07-22 18:40:59.658670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.805 [2024-07-22 18:40:59.658711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:47.805 [2024-07-22 18:40:59.658726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.805 [2024-07-22 18:40:59.658738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.805 [2024-07-22 18:40:59.658789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.805 [2024-07-22 18:40:59.658813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:47.805 [2024-07-22 18:40:59.658826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.805 [2024-07-22 18:40:59.658838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.805 [2024-07-22 18:40:59.658945] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:33:47.805 [2024-07-22 18:40:59.658966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:47.805 [2024-07-22 18:40:59.658978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.805 [2024-07-22 18:40:59.658989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.805 [2024-07-22 18:40:59.659033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.805 [2024-07-22 18:40:59.659051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:47.805 [2024-07-22 18:40:59.659070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.805 [2024-07-22 18:40:59.659082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.805 [2024-07-22 18:40:59.659127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.805 [2024-07-22 18:40:59.659142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:47.805 [2024-07-22 18:40:59.659153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.805 [2024-07-22 18:40:59.659165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.805 [2024-07-22 18:40:59.659216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.805 [2024-07-22 18:40:59.659238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:47.805 [2024-07-22 18:40:59.659251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.805 [2024-07-22 18:40:59.659262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.805 [2024-07-22 18:40:59.659423] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 276.613 ms, result 0 00:33:49.185 00:33:49.185 00:33:49.185 18:41:01 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:33:49.444 [2024-07-22 18:41:01.295154] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:33:49.444 [2024-07-22 18:41:01.295355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88571 ] 00:33:49.703 [2024-07-22 18:41:01.464991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.703 [2024-07-22 18:41:01.702809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.272 [2024-07-22 18:41:02.049493] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:50.272 [2024-07-22 18:41:02.049576] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:50.272 [2024-07-22 18:41:02.212208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.272 [2024-07-22 18:41:02.212272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:50.272 [2024-07-22 18:41:02.212292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:50.272 [2024-07-22 18:41:02.212305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.272 [2024-07-22 18:41:02.212388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.272 [2024-07-22 18:41:02.212409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:50.272 [2024-07-22 18:41:02.212437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:33:50.272 [2024-07-22 18:41:02.212453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.272 [2024-07-22 18:41:02.212486] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:50.272 [2024-07-22 18:41:02.213389] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:50.272 [2024-07-22 18:41:02.213429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.272 [2024-07-22 18:41:02.213447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:50.272 [2024-07-22 18:41:02.213460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:33:50.272 [2024-07-22 18:41:02.213471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.272 [2024-07-22 18:41:02.213980] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:50.272 [2024-07-22 18:41:02.214021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.272 [2024-07-22 18:41:02.214035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:50.272 [2024-07-22 18:41:02.214048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:33:50.272 [2024-07-22 18:41:02.214076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.272 [2024-07-22 18:41:02.214138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.272 [2024-07-22 18:41:02.214155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:50.272 [2024-07-22 18:41:02.214167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:50.272 [2024-07-22 18:41:02.214178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.272 [2024-07-22 18:41:02.214600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:50.272 [2024-07-22 18:41:02.214627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:50.272 [2024-07-22 18:41:02.214641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:33:50.272 [2024-07-22 18:41:02.214657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.272 [2024-07-22 18:41:02.214777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.272 [2024-07-22 18:41:02.214802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:50.272 [2024-07-22 18:41:02.214815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:33:50.272 [2024-07-22 18:41:02.214826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.272 [2024-07-22 18:41:02.214862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.272 [2024-07-22 18:41:02.214877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:50.272 [2024-07-22 18:41:02.214889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:50.272 [2024-07-22 18:41:02.214900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.272 [2024-07-22 18:41:02.214932] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:50.272 [2024-07-22 18:41:02.220108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.272 [2024-07-22 18:41:02.220144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:50.272 [2024-07-22 18:41:02.220164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.182 ms 00:33:50.272 [2024-07-22 18:41:02.220175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.272 [2024-07-22 18:41:02.220218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.272 [2024-07-22 18:41:02.220233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:50.272 [2024-07-22 18:41:02.220245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:50.272 [2024-07-22 18:41:02.220255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.272 [2024-07-22 18:41:02.220316] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:50.272 [2024-07-22 18:41:02.220360] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:50.272 [2024-07-22 18:41:02.220406] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:50.272 [2024-07-22 18:41:02.220441] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:33:50.272 [2024-07-22 18:41:02.220542] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:50.272 [2024-07-22 18:41:02.220558] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:50.272 [2024-07-22 18:41:02.220572] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:33:50.272 [2024-07-22 18:41:02.220587] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:50.272 [2024-07-22 18:41:02.220600] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:50.272 [2024-07-22 18:41:02.220612] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:50.272 [2024-07-22 18:41:02.220623] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:50.272 [2024-07-22 18:41:02.220634] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:50.272 [2024-07-22 18:41:02.220649] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:50.272 [2024-07-22 18:41:02.220661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.272 [2024-07-22 18:41:02.220672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:50.272 [2024-07-22 18:41:02.220702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:33:50.272 [2024-07-22 18:41:02.220714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.272 [2024-07-22 18:41:02.220818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.272 [2024-07-22 18:41:02.220833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:50.272 [2024-07-22 18:41:02.220845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:33:50.272 [2024-07-22 18:41:02.220855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.272 [2024-07-22 18:41:02.220961] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:50.272 [2024-07-22 18:41:02.220987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:50.272 [2024-07-22 18:41:02.221000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:50.272 [2024-07-22 18:41:02.221011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.272 [2024-07-22 18:41:02.221022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:50.272 [2024-07-22 18:41:02.221032] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:50.272 [2024-07-22 18:41:02.221042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:50.273 [2024-07-22 18:41:02.221052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:50.273 [2024-07-22 18:41:02.221062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:50.273 [2024-07-22 18:41:02.221072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:50.273 [2024-07-22 18:41:02.221082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:50.273 [2024-07-22 18:41:02.221092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:50.273 [2024-07-22 18:41:02.221102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:50.273 [2024-07-22 18:41:02.221113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:50.273 [2024-07-22 18:41:02.221123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:50.273 [2024-07-22 18:41:02.221133] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.273 [2024-07-22 18:41:02.221143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:50.273 [2024-07-22 18:41:02.221154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:50.273 [2024-07-22 18:41:02.221164] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.273 [2024-07-22 18:41:02.221175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:50.273 [2024-07-22 18:41:02.221186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:50.273 [2024-07-22 18:41:02.221197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:50.273 [2024-07-22 18:41:02.221221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:50.273 [2024-07-22 18:41:02.221232] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:50.273 [2024-07-22 18:41:02.221242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:50.273 [2024-07-22 18:41:02.221252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:50.273 [2024-07-22 18:41:02.221263] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:50.273 [2024-07-22 18:41:02.221273] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:50.273 [2024-07-22 18:41:02.221283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:50.273 [2024-07-22 18:41:02.221293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:50.273 [2024-07-22 18:41:02.221303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:50.273 [2024-07-22 18:41:02.221313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:50.273 [2024-07-22 18:41:02.221324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:50.273 [2024-07-22 18:41:02.221335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:50.273 [2024-07-22 18:41:02.221345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:50.273 [2024-07-22 18:41:02.221355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:50.273 [2024-07-22 18:41:02.221365] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:50.273 [2024-07-22 18:41:02.221375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:50.273 [2024-07-22 18:41:02.221385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:50.273 [2024-07-22 18:41:02.221394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.273 [2024-07-22 18:41:02.221404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:50.273 [2024-07-22 18:41:02.221414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:50.273 [2024-07-22 18:41:02.221424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.273 [2024-07-22 18:41:02.221434] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:50.273 [2024-07-22 18:41:02.221445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:50.273 [2024-07-22 18:41:02.221455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:50.273 [2024-07-22 18:41:02.221466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.273 [2024-07-22 18:41:02.221477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:50.273 [2024-07-22 18:41:02.221487] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:50.273 [2024-07-22 18:41:02.221497] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:50.273 
[2024-07-22 18:41:02.221507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:50.273 [2024-07-22 18:41:02.221519] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:50.273 [2024-07-22 18:41:02.221530] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:50.273 [2024-07-22 18:41:02.221541] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:50.273 [2024-07-22 18:41:02.221555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:50.273 [2024-07-22 18:41:02.221568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:50.273 [2024-07-22 18:41:02.221579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:50.273 [2024-07-22 18:41:02.221590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:50.273 [2024-07-22 18:41:02.221601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:50.273 [2024-07-22 18:41:02.221613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:50.273 [2024-07-22 18:41:02.221624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:50.273 [2024-07-22 18:41:02.221635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:50.273 [2024-07-22 18:41:02.221646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:50.273 [2024-07-22 18:41:02.221657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:50.273 [2024-07-22 18:41:02.221667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:50.273 [2024-07-22 18:41:02.221700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:50.273 [2024-07-22 18:41:02.221722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:50.273 [2024-07-22 18:41:02.221733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:50.273 [2024-07-22 18:41:02.221755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:50.273 [2024-07-22 18:41:02.221766] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:50.273 [2024-07-22 18:41:02.221784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:50.273 [2024-07-22 18:41:02.221796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:50.273 [2024-07-22 18:41:02.221808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:50.273 [2024-07-22 18:41:02.221819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:50.273 [2024-07-22 18:41:02.221831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:50.273 [2024-07-22 18:41:02.221843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.273 [2024-07-22 18:41:02.221854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:50.273 [2024-07-22 18:41:02.221866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:33:50.273 [2024-07-22 18:41:02.221877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.273 [2024-07-22 18:41:02.269876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.273 [2024-07-22 18:41:02.269934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:50.273 [2024-07-22 18:41:02.269954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.939 ms 00:33:50.273 [2024-07-22 18:41:02.269967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.273 [2024-07-22 18:41:02.270092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.273 [2024-07-22 18:41:02.270116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:50.273 [2024-07-22 18:41:02.270130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:33:50.273 [2024-07-22 18:41:02.270142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.315094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.315157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:50.533 [2024-07-22 18:41:02.315190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.850 ms 00:33:50.533 [2024-07-22 18:41:02.315202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.315268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.315284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:50.533 [2024-07-22 18:41:02.315303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:50.533 [2024-07-22 18:41:02.315314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.315485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.315503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:50.533 [2024-07-22 18:41:02.315517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:33:50.533 [2024-07-22 18:41:02.315529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.315703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.315728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:50.533 [2024-07-22 18:41:02.315741] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:33:50.533 [2024-07-22 18:41:02.315757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.334885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.334944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:50.533 [2024-07-22 18:41:02.334997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.093 ms 00:33:50.533 [2024-07-22 18:41:02.335008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.335184] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:50.533 [2024-07-22 18:41:02.335208] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:50.533 [2024-07-22 18:41:02.335222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.335234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:50.533 [2024-07-22 18:41:02.335246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:33:50.533 [2024-07-22 18:41:02.335257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.349603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.349635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:50.533 [2024-07-22 18:41:02.349649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.312 ms 00:33:50.533 [2024-07-22 18:41:02.349660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.349798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.349828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:50.533 [2024-07-22 18:41:02.349843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:33:50.533 [2024-07-22 18:41:02.349854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.349926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.349955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:50.533 [2024-07-22 18:41:02.349972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:50.533 [2024-07-22 18:41:02.349983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.350712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.350743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:50.533 [2024-07-22 18:41:02.350757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:33:50.533 [2024-07-22 18:41:02.350767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.350796] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:50.533 [2024-07-22 18:41:02.350813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.350824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:50.533 [2024-07-22 18:41:02.350854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:50.533 [2024-07-22 18:41:02.350865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.365003] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:50.533 [2024-07-22 18:41:02.365272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.365301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:50.533 [2024-07-22 18:41:02.365316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.381 ms 00:33:50.533 [2024-07-22 18:41:02.365328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.367738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.367769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:50.533 [2024-07-22 18:41:02.367788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.382 ms 00:33:50.533 [2024-07-22 18:41:02.367799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.367883] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:33:50.533 [2024-07-22 18:41:02.368412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.368441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:50.533 [2024-07-22 18:41:02.368455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:33:50.533 [2024-07-22 18:41:02.368465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.368501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.368517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:50.533 [2024-07-22 18:41:02.368535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:50.533 [2024-07-22 18:41:02.368546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.368584] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:50.533 [2024-07-22 18:41:02.368600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.368611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:50.533 [2024-07-22 18:41:02.368623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:50.533 [2024-07-22 18:41:02.368634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.400061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.400109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:50.533 [2024-07-22 18:41:02.400126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.396 ms 00:33:50.533 [2024-07-22 18:41:02.400137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.533 [2024-07-22 18:41:02.400218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.533 [2024-07-22 18:41:02.400237] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:50.533 [2024-07-22 18:41:02.400250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:50.534 [2024-07-22 18:41:02.400261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.534 [2024-07-22 18:41:02.409757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 195.501 ms, result 0 00:34:30.427  Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-22 18:41:42.279996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.427 [2024-07-22 18:41:42.280090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:30.427 [2024-07-22 18:41:42.280113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:30.427 [2024-07-22 18:41:42.280135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.427 [2024-07-22 18:41:42.280168] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:30.427 [2024-07-22 18:41:42.284341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.427 [2024-07-22 18:41:42.284377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:30.427 [2024-07-22 18:41:42.284392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.148 ms 00:34:30.427 [2024-07-22 18:41:42.284404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.427 [2024-07-22 18:41:42.284656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.427 [2024-07-22 18:41:42.284702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:30.427 [2024-07-22 18:41:42.284717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:34:30.427 [2024-07-22 18:41:42.284728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.427 [2024-07-22 18:41:42.284766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.427 [2024-07-22 18:41:42.284782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:30.427 
[2024-07-22 18:41:42.284794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:30.427 [2024-07-22 18:41:42.284805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.427 [2024-07-22 18:41:42.284869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.427 [2024-07-22 18:41:42.284884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:30.427 [2024-07-22 18:41:42.284896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:34:30.427 [2024-07-22 18:41:42.284912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.427 [2024-07-22 18:41:42.284933] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:30.427 [2024-07-22 18:41:42.284951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:34:30.427 [2024-07-22 18:41:42.284965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.284977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.284989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285188] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 
18:41:42.285493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:34:30.427 [2024-07-22 18:41:42.285822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.285999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:30.427 [2024-07-22 18:41:42.286208] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:30.427 [2024-07-22 18:41:42.286220] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b92e00d0-5341-470f-9738-7edc0d43ccba 00:34:30.427 [2024-07-22 18:41:42.286232] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:34:30.427 [2024-07-22 18:41:42.286244] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3360 00:34:30.427 [2024-07-22 18:41:42.286255] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3328 00:34:30.427 [2024-07-22 18:41:42.286267] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:34:30.427 [2024-07-22 18:41:42.286278] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:30.427 [2024-07-22 18:41:42.286289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:30.427 [2024-07-22 18:41:42.286306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:30.427 [2024-07-22 18:41:42.286317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:30.427 [2024-07-22 18:41:42.286327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:30.427 [2024-07-22 18:41:42.286338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.427 [2024-07-22 18:41:42.286349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:30.427 [2024-07-22 18:41:42.286361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.406 ms 00:34:30.427 [2024-07-22 18:41:42.286372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.427 [2024-07-22 18:41:42.303260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.427 [2024-07-22 18:41:42.303315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:30.427 [2024-07-22 18:41:42.303331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.865 ms 00:34:30.427 [2024-07-22 18:41:42.303343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.427 [2024-07-22 18:41:42.303847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.428 [2024-07-22 18:41:42.303878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:30.428 [2024-07-22 18:41:42.303892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:34:30.428 [2024-07-22 18:41:42.303903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.428 [2024-07-22 18:41:42.342199] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.428 [2024-07-22 18:41:42.342254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:30.428 [2024-07-22 18:41:42.342277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.428 [2024-07-22 18:41:42.342289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.428 [2024-07-22 18:41:42.342361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.428 [2024-07-22 18:41:42.342376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:30.428 [2024-07-22 18:41:42.342389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.428 [2024-07-22 18:41:42.342399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.428 [2024-07-22 18:41:42.342475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.428 [2024-07-22 18:41:42.342494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:30.428 [2024-07-22 18:41:42.342506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.428 [2024-07-22 18:41:42.342523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.428 [2024-07-22 18:41:42.342546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.428 [2024-07-22 18:41:42.342560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:30.428 [2024-07-22 18:41:42.342572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.428 [2024-07-22 18:41:42.342582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.686 [2024-07-22 18:41:42.448401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.686 [2024-07-22 18:41:42.448472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:30.686 [2024-07-22 18:41:42.448499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.686 [2024-07-22 18:41:42.448511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.686 [2024-07-22 18:41:42.533947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.686 [2024-07-22 18:41:42.534021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:30.686 [2024-07-22 18:41:42.534041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.686 [2024-07-22 18:41:42.534053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.686 [2024-07-22 18:41:42.534138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.686 [2024-07-22 18:41:42.534156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:30.686 [2024-07-22 18:41:42.534169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.686 [2024-07-22 18:41:42.534180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.686 [2024-07-22 18:41:42.534237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.686 [2024-07-22 18:41:42.534258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:30.686 [2024-07-22 18:41:42.534270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.686 [2024-07-22 18:41:42.534281] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:34:30.686 [2024-07-22 18:41:42.534394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.686 [2024-07-22 18:41:42.534414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:30.686 [2024-07-22 18:41:42.534427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.686 [2024-07-22 18:41:42.534438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.686 [2024-07-22 18:41:42.534476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.686 [2024-07-22 18:41:42.534499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:30.686 [2024-07-22 18:41:42.534511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.686 [2024-07-22 18:41:42.534522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.686 [2024-07-22 18:41:42.534568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.686 [2024-07-22 18:41:42.534584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:30.686 [2024-07-22 18:41:42.534597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.686 [2024-07-22 18:41:42.534607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.686 [2024-07-22 18:41:42.534663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.686 [2024-07-22 18:41:42.534712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:30.686 [2024-07-22 18:41:42.534728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.686 [2024-07-22 18:41:42.534739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.686 [2024-07-22 18:41:42.534889] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 254.860 ms, result 0 00:34:31.620 00:34:31.620 00:34:31.879 18:41:43 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:33.822 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:33.822 18:41:45 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:33.822 18:41:45 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:33.822 18:41:45 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:34.080 18:41:45 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:34.080 18:41:45 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:34.081 Process with pid 87108 is not found 00:34:34.081 Remove shared memory files 00:34:34.081 18:41:45 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 87108 00:34:34.081 18:41:45 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 87108 ']' 00:34:34.081 18:41:45 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 87108 00:34:34.081 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (87108) - No such process 00:34:34.081 18:41:45 ftl.ftl_restore_fast -- common/autotest_common.sh@975 -- # echo 'Process with pid 87108 is not found' 00:34:34.081 18:41:45 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:34:34.081 18:41:45 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # 
echo Remove shared memory files 00:34:34.081 18:41:45 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:34:34.081 18:41:45 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_b92e00d0-5341-470f-9738-7edc0d43ccba_band_md /dev/hugepages/ftl_b92e00d0-5341-470f-9738-7edc0d43ccba_l2p_l1 /dev/hugepages/ftl_b92e00d0-5341-470f-9738-7edc0d43ccba_l2p_l2 /dev/hugepages/ftl_b92e00d0-5341-470f-9738-7edc0d43ccba_l2p_l2_ctx /dev/hugepages/ftl_b92e00d0-5341-470f-9738-7edc0d43ccba_nvc_md /dev/hugepages/ftl_b92e00d0-5341-470f-9738-7edc0d43ccba_p2l_pool /dev/hugepages/ftl_b92e00d0-5341-470f-9738-7edc0d43ccba_sb /dev/hugepages/ftl_b92e00d0-5341-470f-9738-7edc0d43ccba_sb_shm /dev/hugepages/ftl_b92e00d0-5341-470f-9738-7edc0d43ccba_trim_bitmap /dev/hugepages/ftl_b92e00d0-5341-470f-9738-7edc0d43ccba_trim_log /dev/hugepages/ftl_b92e00d0-5341-470f-9738-7edc0d43ccba_trim_md /dev/hugepages/ftl_b92e00d0-5341-470f-9738-7edc0d43ccba_vmap 00:34:34.081 18:41:45 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:34.081 18:41:45 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:34.081 18:41:45 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:34.081 00:34:34.081 real 3m12.850s 00:34:34.081 user 2m58.825s 00:34:34.081 sys 0m16.240s 00:34:34.081 18:41:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:34.081 18:41:45 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:34.081 ************************************ 00:34:34.081 END TEST ftl_restore_fast 00:34:34.081 ************************************ 00:34:34.081 18:41:46 ftl -- common/autotest_common.sh@1142 -- # return 0 00:34:34.081 18:41:46 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:34.081 18:41:46 ftl -- ftl/ftl.sh@14 -- # killprocess 79212 00:34:34.081 18:41:46 ftl -- common/autotest_common.sh@948 -- # '[' -z 79212 ']' 00:34:34.081 18:41:46 ftl -- common/autotest_common.sh@952 -- # kill -0 79212 00:34:34.081 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (79212) - No such process 00:34:34.081 Process with pid 79212 is not found 00:34:34.081 18:41:46 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 79212 is not found' 00:34:34.081 18:41:46 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:34.081 18:41:46 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=89023 00:34:34.081 18:41:46 ftl -- ftl/ftl.sh@20 -- # waitforlisten 89023 00:34:34.081 18:41:46 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:34.081 18:41:46 ftl -- common/autotest_common.sh@829 -- # '[' -z 89023 ']' 00:34:34.081 18:41:46 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.081 18:41:46 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:34.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.081 18:41:46 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.081 18:41:46 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:34.081 18:41:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:34.339 [2024-07-22 18:41:46.141074] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:34:34.339 [2024-07-22 18:41:46.141277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89023 ] 00:34:34.339 [2024-07-22 18:41:46.316275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.598 [2024-07-22 18:41:46.565562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.531 18:41:47 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:35.531 18:41:47 ftl -- common/autotest_common.sh@862 -- # return 0 00:34:35.531 18:41:47 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:35.789 nvme0n1 00:34:35.789 18:41:47 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:35.789 18:41:47 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:35.789 18:41:47 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:36.047 18:41:47 ftl -- ftl/common.sh@28 -- # stores=4f919917-383c-4ad1-b38e-df03a2aad0ae 00:34:36.047 18:41:47 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:36.047 18:41:47 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4f919917-383c-4ad1-b38e-df03a2aad0ae 00:34:36.305 18:41:48 ftl -- ftl/ftl.sh@23 -- # killprocess 89023 00:34:36.305 18:41:48 ftl -- common/autotest_common.sh@948 -- # '[' -z 89023 ']' 00:34:36.305 18:41:48 ftl -- common/autotest_common.sh@952 -- # kill -0 89023 00:34:36.305 18:41:48 ftl -- common/autotest_common.sh@953 -- # uname 00:34:36.305 18:41:48 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:36.305 18:41:48 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89023 00:34:36.305 18:41:48 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:36.305 killing process with pid 89023 00:34:36.305 18:41:48 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:36.305 18:41:48 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89023' 00:34:36.305 18:41:48 ftl -- common/autotest_common.sh@967 -- # kill 89023 00:34:36.305 18:41:48 ftl -- common/autotest_common.sh@972 -- # wait 89023 00:34:38.834 18:41:50 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:38.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:38.834 Waiting for block devices as requested 00:34:38.834 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:38.834 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:39.092 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:39.092 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:44.467 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:44.467 Remove shared memory files 00:34:44.467 18:41:56 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:44.467 18:41:56 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:44.467 18:41:56 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:44.467 18:41:56 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:44.467 18:41:56 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:44.467 18:41:56 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:44.467 18:41:56 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:44.467 00:34:44.467 real 
15m7.533s 00:34:44.467 user 17m52.534s 00:34:44.467 sys 1m51.493s 00:34:44.467 18:41:56 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:44.467 18:41:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:44.467 ************************************ 00:34:44.467 END TEST ftl 00:34:44.467 ************************************ 00:34:44.467 18:41:56 -- common/autotest_common.sh@1142 -- # return 0 00:34:44.467 18:41:56 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:34:44.467 18:41:56 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:34:44.467 18:41:56 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:34:44.467 18:41:56 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:34:44.467 18:41:56 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:34:44.467 18:41:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:34:44.467 18:41:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:34:44.467 18:41:56 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:34:44.467 18:41:56 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:34:44.467 18:41:56 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:34:44.467 18:41:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:44.467 18:41:56 -- common/autotest_common.sh@10 -- # set +x 00:34:44.467 18:41:56 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:34:44.467 18:41:56 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:34:44.467 18:41:56 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:34:44.467 18:41:56 -- common/autotest_common.sh@10 -- # set +x 00:34:45.401 INFO: APP EXITING 00:34:45.401 INFO: killing all VMs 00:34:45.401 INFO: killing vhost app 00:34:45.401 INFO: EXIT DONE 00:34:45.659 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:46.226 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:46.226 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:46.226 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:46.226 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:46.484 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:47.051 Cleaning 00:34:47.051 Removing: /var/run/dpdk/spdk0/config 00:34:47.051 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:47.051 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:47.051 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:47.051 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:47.051 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:47.051 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:47.051 Removing: /var/run/dpdk/spdk0 00:34:47.051 Removing: /var/run/dpdk/spdk_pid61836 00:34:47.051 Removing: /var/run/dpdk/spdk_pid62062 00:34:47.051 Removing: /var/run/dpdk/spdk_pid62283 00:34:47.051 Removing: /var/run/dpdk/spdk_pid62388 00:34:47.051 Removing: /var/run/dpdk/spdk_pid62439 00:34:47.051 Removing: /var/run/dpdk/spdk_pid62567 00:34:47.051 Removing: /var/run/dpdk/spdk_pid62596 00:34:47.051 Removing: /var/run/dpdk/spdk_pid62771 00:34:47.051 Removing: /var/run/dpdk/spdk_pid62880 00:34:47.051 Removing: /var/run/dpdk/spdk_pid62974 00:34:47.051 Removing: /var/run/dpdk/spdk_pid63088 00:34:47.051 Removing: /var/run/dpdk/spdk_pid63188 00:34:47.051 Removing: /var/run/dpdk/spdk_pid63227 00:34:47.051 Removing: /var/run/dpdk/spdk_pid63269 00:34:47.051 Removing: /var/run/dpdk/spdk_pid63337 00:34:47.051 Removing: /var/run/dpdk/spdk_pid63445 00:34:47.051 Removing: /var/run/dpdk/spdk_pid63905 
00:34:47.051 Removing: /var/run/dpdk/spdk_pid63975 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64049 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64066 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64213 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64235 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64383 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64404 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64474 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64492 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64556 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64574 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64761 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64803 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64879 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64954 00:34:47.051 Removing: /var/run/dpdk/spdk_pid64991 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65069 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65115 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65162 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65214 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65255 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65307 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65348 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65395 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65441 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65488 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65534 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65581 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65632 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65674 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65725 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65767 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65814 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65863 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65917 00:34:47.052 Removing: /var/run/dpdk/spdk_pid65965 00:34:47.052 Removing: /var/run/dpdk/spdk_pid66007 00:34:47.052 Removing: /var/run/dpdk/spdk_pid66094 00:34:47.052 Removing: /var/run/dpdk/spdk_pid66215 00:34:47.052 Removing: /var/run/dpdk/spdk_pid66383 00:34:47.052 Removing: /var/run/dpdk/spdk_pid66478 00:34:47.052 Removing: /var/run/dpdk/spdk_pid66526 00:34:47.052 Removing: /var/run/dpdk/spdk_pid66991 00:34:47.052 Removing: /var/run/dpdk/spdk_pid67095 00:34:47.052 Removing: /var/run/dpdk/spdk_pid67210 00:34:47.052 Removing: /var/run/dpdk/spdk_pid67267 00:34:47.052 Removing: /var/run/dpdk/spdk_pid67294 00:34:47.052 Removing: /var/run/dpdk/spdk_pid67374 00:34:47.052 Removing: /var/run/dpdk/spdk_pid68013 00:34:47.052 Removing: /var/run/dpdk/spdk_pid68062 00:34:47.052 Removing: /var/run/dpdk/spdk_pid68573 00:34:47.052 Removing: /var/run/dpdk/spdk_pid68678 00:34:47.052 Removing: /var/run/dpdk/spdk_pid68798 00:34:47.052 Removing: /var/run/dpdk/spdk_pid68851 00:34:47.052 Removing: /var/run/dpdk/spdk_pid68882 00:34:47.310 Removing: /var/run/dpdk/spdk_pid68913 00:34:47.310 Removing: /var/run/dpdk/spdk_pid70794 00:34:47.310 Removing: /var/run/dpdk/spdk_pid70941 00:34:47.310 Removing: /var/run/dpdk/spdk_pid70945 00:34:47.310 Removing: /var/run/dpdk/spdk_pid70957 00:34:47.310 Removing: /var/run/dpdk/spdk_pid71006 00:34:47.310 Removing: /var/run/dpdk/spdk_pid71011 00:34:47.310 Removing: /var/run/dpdk/spdk_pid71023 00:34:47.310 Removing: /var/run/dpdk/spdk_pid71068 00:34:47.310 Removing: /var/run/dpdk/spdk_pid71072 00:34:47.310 Removing: /var/run/dpdk/spdk_pid71084 00:34:47.310 Removing: /var/run/dpdk/spdk_pid71130 00:34:47.310 Removing: /var/run/dpdk/spdk_pid71134 00:34:47.310 Removing: /var/run/dpdk/spdk_pid71146 00:34:47.310 Removing: 
/var/run/dpdk/spdk_pid72496 00:34:47.310 Removing: /var/run/dpdk/spdk_pid72596 00:34:47.310 Removing: /var/run/dpdk/spdk_pid74000 00:34:47.310 Removing: /var/run/dpdk/spdk_pid75339 00:34:47.310 Removing: /var/run/dpdk/spdk_pid75474 00:34:47.310 Removing: /var/run/dpdk/spdk_pid75600 00:34:47.310 Removing: /var/run/dpdk/spdk_pid75722 00:34:47.310 Removing: /var/run/dpdk/spdk_pid75871 00:34:47.310 Removing: /var/run/dpdk/spdk_pid75953 00:34:47.310 Removing: /var/run/dpdk/spdk_pid76093 00:34:47.310 Removing: /var/run/dpdk/spdk_pid76472 00:34:47.310 Removing: /var/run/dpdk/spdk_pid76510 00:34:47.310 Removing: /var/run/dpdk/spdk_pid76983 00:34:47.310 Removing: /var/run/dpdk/spdk_pid77168 00:34:47.310 Removing: /var/run/dpdk/spdk_pid77271 00:34:47.310 Removing: /var/run/dpdk/spdk_pid77388 00:34:47.310 Removing: /var/run/dpdk/spdk_pid77447 00:34:47.310 Removing: /var/run/dpdk/spdk_pid77478 00:34:47.310 Removing: /var/run/dpdk/spdk_pid77761 00:34:47.310 Removing: /var/run/dpdk/spdk_pid77825 00:34:47.310 Removing: /var/run/dpdk/spdk_pid77898 00:34:47.310 Removing: /var/run/dpdk/spdk_pid78286 00:34:47.310 Removing: /var/run/dpdk/spdk_pid78428 00:34:47.310 Removing: /var/run/dpdk/spdk_pid79212 00:34:47.310 Removing: /var/run/dpdk/spdk_pid79352 00:34:47.310 Removing: /var/run/dpdk/spdk_pid79555 00:34:47.310 Removing: /var/run/dpdk/spdk_pid79653 00:34:47.310 Removing: /var/run/dpdk/spdk_pid80017 00:34:47.310 Removing: /var/run/dpdk/spdk_pid80298 00:34:47.310 Removing: /var/run/dpdk/spdk_pid80662 00:34:47.310 Removing: /var/run/dpdk/spdk_pid80858 00:34:47.310 Removing: /var/run/dpdk/spdk_pid81005 00:34:47.310 Removing: /var/run/dpdk/spdk_pid81069 00:34:47.310 Removing: /var/run/dpdk/spdk_pid81207 00:34:47.310 Removing: /var/run/dpdk/spdk_pid81242 00:34:47.311 Removing: /var/run/dpdk/spdk_pid81309 00:34:47.311 Removing: /var/run/dpdk/spdk_pid81523 00:34:47.311 Removing: /var/run/dpdk/spdk_pid81771 00:34:47.311 Removing: /var/run/dpdk/spdk_pid82185 00:34:47.311 Removing: /var/run/dpdk/spdk_pid82639 00:34:47.311 Removing: /var/run/dpdk/spdk_pid83075 00:34:47.311 Removing: /var/run/dpdk/spdk_pid83580 00:34:47.311 Removing: /var/run/dpdk/spdk_pid83722 00:34:47.311 Removing: /var/run/dpdk/spdk_pid83832 00:34:47.311 Removing: /var/run/dpdk/spdk_pid84517 00:34:47.311 Removing: /var/run/dpdk/spdk_pid84603 00:34:47.311 Removing: /var/run/dpdk/spdk_pid85084 00:34:47.311 Removing: /var/run/dpdk/spdk_pid85511 00:34:47.311 Removing: /var/run/dpdk/spdk_pid86009 00:34:47.311 Removing: /var/run/dpdk/spdk_pid86133 00:34:47.311 Removing: /var/run/dpdk/spdk_pid86186 00:34:47.311 Removing: /var/run/dpdk/spdk_pid86256 00:34:47.311 Removing: /var/run/dpdk/spdk_pid86323 00:34:47.311 Removing: /var/run/dpdk/spdk_pid86393 00:34:47.311 Removing: /var/run/dpdk/spdk_pid86604 00:34:47.311 Removing: /var/run/dpdk/spdk_pid86677 00:34:47.311 Removing: /var/run/dpdk/spdk_pid86751 00:34:47.311 Removing: /var/run/dpdk/spdk_pid86831 00:34:47.311 Removing: /var/run/dpdk/spdk_pid86866 00:34:47.311 Removing: /var/run/dpdk/spdk_pid86939 00:34:47.311 Removing: /var/run/dpdk/spdk_pid87108 00:34:47.311 Removing: /var/run/dpdk/spdk_pid87337 00:34:47.311 Removing: /var/run/dpdk/spdk_pid87739 00:34:47.311 Removing: /var/run/dpdk/spdk_pid88155 00:34:47.311 Removing: /var/run/dpdk/spdk_pid88571 00:34:47.311 Removing: /var/run/dpdk/spdk_pid89023 00:34:47.311 Clean 00:34:47.569 18:41:59 -- common/autotest_common.sh@1451 -- # return 0 00:34:47.569 18:41:59 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:34:47.569 18:41:59 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:34:47.569 18:41:59 -- common/autotest_common.sh@10 -- # set +x 00:34:47.569 18:41:59 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:34:47.569 18:41:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:47.569 18:41:59 -- common/autotest_common.sh@10 -- # set +x 00:34:47.569 18:41:59 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:47.569 18:41:59 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:47.569 18:41:59 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:47.569 18:41:59 -- spdk/autotest.sh@391 -- # hash lcov 00:34:47.569 18:41:59 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:47.569 18:41:59 -- spdk/autotest.sh@393 -- # hostname 00:34:47.569 18:41:59 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:47.828 geninfo: WARNING: invalid characters removed from testname! 00:35:14.546 18:42:25 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:17.830 18:42:29 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:20.362 18:42:32 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:22.895 18:42:34 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:25.432 18:42:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:28.717 18:42:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:31.251 18:42:42 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:31.251 18:42:42 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:31.251 18:42:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:31.251 18:42:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.251 18:42:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.251 18:42:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.251 18:42:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.251 18:42:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.251 18:42:42 -- paths/export.sh@5 -- $ export PATH 00:35:31.251 18:42:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.251 18:42:42 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:35:31.251 18:42:42 -- common/autobuild_common.sh@447 -- $ date +%s 00:35:31.251 18:42:42 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721673762.XXXXXX 00:35:31.251 18:42:42 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721673762.xhWvtx 00:35:31.251 18:42:42 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:35:31.251 18:42:42 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:35:31.251 18:42:42 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:35:31.251 18:42:42 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:35:31.252 18:42:42 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:35:31.252 18:42:42 -- common/autobuild_common.sh@463 -- $ get_config_params 00:35:31.252 18:42:42 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:35:31.252 18:42:42 -- common/autotest_common.sh@10 -- $ 
set +x 00:35:31.252 18:42:42 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:35:31.252 18:42:42 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:35:31.252 18:42:42 -- pm/common@17 -- $ local monitor 00:35:31.252 18:42:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:31.252 18:42:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:31.252 18:42:42 -- pm/common@25 -- $ sleep 1 00:35:31.252 18:42:42 -- pm/common@21 -- $ date +%s 00:35:31.252 18:42:42 -- pm/common@21 -- $ date +%s 00:35:31.252 18:42:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721673762 00:35:31.252 18:42:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721673762 00:35:31.252 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721673762_collect-vmstat.pm.log 00:35:31.252 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721673762_collect-cpu-load.pm.log 00:35:31.817 18:42:43 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:35:31.817 18:42:43 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:35:31.817 18:42:43 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:35:31.817 18:42:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:35:31.817 18:42:43 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:35:31.817 18:42:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:35:31.817 18:42:43 -- spdk/autopackage.sh@19 -- $ timing_finish 00:35:31.817 18:42:43 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:31.817 18:42:43 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:35:31.817 18:42:43 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:32.076 18:42:43 -- spdk/autopackage.sh@20 -- $ exit 0 00:35:32.076 18:42:43 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:35:32.076 18:42:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:35:32.076 18:42:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:35:32.076 18:42:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:32.076 18:42:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:35:32.076 18:42:43 -- pm/common@44 -- $ pid=90720 00:35:32.076 18:42:43 -- pm/common@50 -- $ kill -TERM 90720 00:35:32.076 18:42:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:32.076 18:42:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:35:32.076 18:42:43 -- pm/common@44 -- $ pid=90721 00:35:32.076 18:42:43 -- pm/common@50 -- $ kill -TERM 90721 00:35:32.076 + [[ -n 5145 ]] 00:35:32.076 + sudo kill 5145 00:35:33.019 [Pipeline] } 00:35:33.040 [Pipeline] // timeout 00:35:33.046 [Pipeline] } 00:35:33.065 [Pipeline] // stage 00:35:33.071 [Pipeline] } 00:35:33.089 [Pipeline] // catchError 00:35:33.098 [Pipeline] stage 00:35:33.100 [Pipeline] { (Stop VM) 
00:35:33.113 [Pipeline] sh 00:35:33.393 + vagrant halt 00:35:37.607 ==> default: Halting domain... 00:35:42.909 [Pipeline] sh 00:35:43.222 + vagrant destroy -f 00:35:47.407 ==> default: Removing domain... 00:35:47.419 [Pipeline] sh 00:35:47.697 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:35:47.706 [Pipeline] } 00:35:47.723 [Pipeline] // stage 00:35:47.728 [Pipeline] } 00:35:47.744 [Pipeline] // dir 00:35:47.748 [Pipeline] } 00:35:47.765 [Pipeline] // wrap 00:35:47.770 [Pipeline] } 00:35:47.785 [Pipeline] // catchError 00:35:47.793 [Pipeline] stage 00:35:47.795 [Pipeline] { (Epilogue) 00:35:47.806 [Pipeline] sh 00:35:48.085 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:54.683 [Pipeline] catchError 00:35:54.685 [Pipeline] { 00:35:54.699 [Pipeline] sh 00:35:54.979 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:54.979 Artifacts sizes are good 00:35:54.988 [Pipeline] } 00:35:55.005 [Pipeline] // catchError 00:35:55.019 [Pipeline] archiveArtifacts 00:35:55.027 Archiving artifacts 00:35:55.149 [Pipeline] cleanWs 00:35:55.158 [WS-CLEANUP] Deleting project workspace... 00:35:55.158 [WS-CLEANUP] Deferred wipeout is used... 00:35:55.164 [WS-CLEANUP] done 00:35:55.165 [Pipeline] } 00:35:55.178 [Pipeline] // stage 00:35:55.182 [Pipeline] } 00:35:55.197 [Pipeline] // node 00:35:55.201 [Pipeline] End of Pipeline 00:35:55.237 Finished: SUCCESS